Patterns and Perspectives in Environmental Science

Report Prepared for the
National Science Board
National Science Foundation









Library of Congress Catalog Card Number 73-600219 

For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402 
Price: $7.30 Stock Number 3800-00147 


This report has been prepared as a companion volume and 
supplement to the third annual report of the National Science Board, 
Environmental Science — Challenge for the Seventies (NSB 71-1), 
which was transmitted to the Congress by the President in June 
1971. It contains much of the information and interpretation that 
formed the basis for the conclusions and recommendations of the 
annual report. 

The present document makes no attempt to present a complete 
view of environmental science or a coherent description of the natural 
environment. These undertakings would be both impracticable and 
overambitious within the confines of a single volume. Rather, this 
volume is a compendium of the views and judgments of a large 
number of scientific leaders, addressed to a broadly representative 
array of topics that serve to illustrate, but not define, the scope and 
nature of environmental science today. 

The National Science Board is deeply grateful to these many 
individuals for their thoughtful, candid, and sometimes controversial 
opinions. In some cases the views expressed are in conflict with 
others contained in the report itself or held by other members of the 
scientific community. It is hoped that these conflicting views will 
challenge scientists to resolve these differences and will point out that 
there are, in fact, many areas in environmental science that demand 
substantial investigation before any degree of adequate understand- 
ing is achieved. It is these differences that contribute significantly 
to the "patterns and perspectives" and help to identify directions of 
needed scientific advance. 

In accepting this report and recommending its publication, the 
Board does not endorse all views contained herein, but hopes that 
the report will prove informative to the general reader, that it will 
provide useful insights to assist policymakers, whether in govern- 
ment or in private institutions, and that it will contribute to the 
discussions of scientists — students, teachers, and other professionals 
— who are most intimately concerned with the status and future 
progress of environmental science. 

H. E. Carter 


National Science Board 



The National Science Board owes a debt of gratitude 
to a large number of people for their assistance in the 
preparation of this report. The Board expresses its appre- 
ciation to all of them for a job well done. 

Valuable aid was provided by the consultants who 
gave freely of their opinions and served as a general 
sounding board regarding the handling of the final format 
of this report, as well as its companion volume, Environ- 
mental Science: Challenge for the Seventies. This group 
included: Dr. Julian R. Goldsmith (former member of the 
National Science Board), University of Chicago; Dr. Louis 
J. Battan, University of Arizona; Dr. John E. Cantlon, 
Michigan State University; Dr. Wilbert M. Chapman 
(deceased), Ralston Purina Company; Dr. Roger Revelle, 
Harvard University; and Dr. Gilbert F. White, University 
of Colorado. 

The major responsibility for preparation of this re- 
port was undertaken by the Staff of the National Science 
Foundation, working in consultation with the Board and 
the consultants. Dr. Lawton M. Hartman, III and Dr. 
Eugene W. Bierly directed the effort. The result is 
unique: a profile of a section of science that is rich in 
information, broad in scope, and explicit as to the present 
status of understanding. The National Science Board 
expresses its sincere appreciation for the dedicated work 
of these men. 

The report has been edited by Mrs. Patricia W. Blair, 
special editorial consultant to the Board. The Board 
recognizes that her conscientious effort to present each 
contribution in its best light has made the report much 
more valuable. 

The following staff members of the Foundation and 
others have contributed in various ways, and the Board 
acknowledges the time and effort that these people gave 
from their already busy offices: 

Dr. James R. Barcus, Program Director, Solar Terrestrial Program

Dr. William E. Benson, Head, Earth Sciences Section

Dr. John L. Brooks, Program Director, General Ecology Program

Mrs. Josephine K. Doherty, Program Manager, Regional Environmental Systems Program

Dr. H. Frank Eden, Program Director, Meteorology

Dr. John E. Geisler, Associate Program Director, Meteorology Program

Dr. Walter H. Hodge, Head, Ecology and Systematic Biology Section

Dr. Phillip L. Johnson, Division Director, Environmental Systems and Resources

Mrs. Joan M. Jordan, Assistant Program Director, General Ecology Program

Dr. Edward J. Kuenzler, Program Director, Biological Oceanography Program

Dr. Lawrence H. Larson, Program Director, Physical Oceanography Program

Miss Roberta J. Mears, Assistant Program Director, Biological Oceanography Program

Dr. John M. Neuhold, Program Director, Ecosystem Analysis Program

Dr. Richard I. Schoen, Program Director, Aeronomy

Mr. Peter H. Wyckoff, Program Manager, Weather Modification Program

and, from outside the Foundation,

Mrs. Eileen Cavanaugh, Smithsonian Center for Violent Phenomena

Dr. Charles Cooper, San Diego State University

Miss Margaret Deane, Bureau of Occupational Health and Environmental Epidemiology, State of

Mrs. Barbara L. Kendall, National Center for Atmospheric Research

Dr. Allen Kneese, Resources for the Future

Dr. William H. Matthews, Associate Director, Study of Critical Environmental Problems, M.I.T.

The endless task of typing and filing of correspond- 
ence was handled efficiently and effectively by Mrs. Judith 
M. Curtis, NSF. Her accurate typing of material for this 
volume as well as the first, particularly in situations where 
time was at a premium, is much appreciated. 

Special credit must be given to Mr. John C. Holmes, Head of the Printing and Reproduction Section, NSF. Mr. Holmes has made a difficult job much easier through his understanding of the problems and the positive approach he took in solving them.

There are others who have helped this report to come into being who are not listed by name. To these people, the Board offers its thanks.

Finally, the Board acknowledges its great debt to the approximately 150 scientists who responded to the Board's request for information and analysis. Their frank and sometimes controversial papers are the basis for this report. It could not have been written without them. To these busy scientists, who took precious time to furnish information, the National Science Board is extremely grateful. It is these contributors who have made Patterns and Perspectives in Environmental Science a reality.



A report on environmental science is at best 
a risky undertaking. As was noted in the third 
report of the National Science Board, Environ- 
mental Science — Challenge for the Seventies 
(NSB 71-1): 

Environmental Science is conceived ... as 
the study of all of the systems of air, land, 
water, energy, and life that surround man. 
It includes all science directed to the system- 
level of understanding of the environment, 
drawing especially on such disciplines as 
meteorology, geophysics, oceanography, and 
ecology, and utilizing to the fullest the 
knowledge and techniques developed in such 
fields as physics, chemistry, biology, mathe- 
matics, and engineering. 

Indeed, the natural environment is so all-encom- 
passing, so complex, that any attempt at exposi- 
tion would appear doomed from the outset. 

This report has a more limited, but per- 
haps more crucial, purpose: to assemble, in 
one place, enough material to permit the iden- 
tification of fundamental patterns that might 
help in appraising the status of environmental 
science today. It seeks a basis for tentative 
assessments of: 

1. The availability of essential data and 
successful theoretical formulations; 

2. The present capability of environ- 
mental science to predict future events; and, 

3. The capacity of science to serve 
society in its growing concern with the condi- 
tion of the natural environment and what man 
is doing to it. 

To achieve this end, many leading environ- 
mental scientists were asked by the National 
Science Board to prepare informal statements on 
specific, assigned topics covering a representative 
sample of environmental phenomena. They were 

asked to include their personal opinions and judg- 
ments on the current status of scientific knowl- 
edge and understanding. This volume comprises 
a selection from the responses to those requests. 

In order that the document not be mis- 
understood, or be judged with reference to 
inappropriate criteria, several important ca- 
veats need to be stated. 

First, no attempt has been made to provide 
a complete description of the natural environ- 
ment. Rather, the topics have been selected to 
illustrate a fundamental feature of environmental 
science — namely, that interactions prevail among 
all environmental regimes. 

Second, the report does not attempt a defini- 
tive scientific review of environmental science. 
Such a review, representing the consensus of 
informed opinion, is probably not possible today 
and, at the very least, could not be undertaken 
without a massive team effort. Nor does the 
report attempt to duplicate the many excellent 
surveys that continue to be prepared on the status 
of individual disciplines within the "environ- 
mental sciences." 

Third, this volume is not primarily concerned 
with pollution, a subject of enormous environ- 
mental concern but one that is receiving extensive 
attention in many other places. 

Fourth, in preparing this report for publica- 
tion, it has not been feasible to update the original 
papers. Thus, the material is now nearly two 
years old. In most instances this does not affect 
the conclusions presented, even though advances 
in environmental science are being recorded at an 
increasing rate. 

Finally, it has been assumed, as a matter of 
policy, that all the material included in this report 
has a reasonable scientific basis, even though 
some of the opinions expressed may cause con- 
troversy among specialists, both contributors and 
others. In certain instances, differences of opinion 
will be observed in statements devoted to the 

same topic. It is hoped that any resulting con- 
troversy or disagreement will help to illustrate 
the present status of environmental science and, 
indeed, to generate constructive and extensive 
discussion among scientists. 

Specific attribution of papers and asso- 
ciated illustrative material has been deliberately 
avoided. The exigencies of the publishing sched- 
ule have not permitted authors an opportunity to 
review the edited product, and, where consistency 
could be assured, material from two or more 
papers has been combined; thus, while every 
effort has been made to retain scientific accuracy 
and individual style, authors should not be ex- 
pected to bear individual responsibility for the 
final version. Furthermore, in order to encourage 
candid opinions, contributors were told from the 
first that informality was sought and that indi- 
vidual acknowledgments would not be made. 

"Patterns and perspectives" in environ- 
mental science begin to emerge from a reading 
of the various papers in this report. Several ques- 
tions recur. How adequate are the experimental 
data that comprise an essential underpinning for 
scientific progress? To what extent does a satis- 

factory theoretical structure exist, as distinct from 
a largely qualitative understanding? How mature 
are attempts at mathematical modeling? How 
adequate is the scientific basis for environmental 
control? Has environmental science reached the 
point where regulatory standards can be formu- 
lated in terms of demonstrated benefits and 
costs to society? What further scientific activity 
is needed, and what remains to be done? 

The National Science Board, in its third re- 
port, sought the broad outlines of the answers to 
such questions. Its findings 
and recommendations comprise the first volume 
of the third report, a summary of which is 
appended. It is hoped that the publication of 
these papers — the raw material, so to speak — 
will help to generate further discussion of the 
topics covered and their implications for environ- 
mental science as a whole, its organization and 
staffing, its choice of priorities, its methods of 
investigation, and the extent of established infor- 
mation and theory that can serve as the founda- 
tion of future progress. In this case, as in so 
many others, discussion and controversy are an 
important prelude to action. 








Elements of the Solar-Terrestrial 

System 3 

Terrestrial Effects of Solar Activity . . 13 


1. Deep Earth Processes 21 

An Overview of Deep-Earth 

Chemistry and Physics 21 

A Note on the Earth's Magnetic Field 23 

2. Continental Structures and Processes 

and Sea-floor Spreading 26 

Continental Drift and Sea-floor 

Spreading 26 

Practical Implications of Major 

Continental Processes 32 

3. Earthquakes 35 

Earthquake Prediction and Prevention 35 

4. Volcanoes 40 

Volcanoes and Man's Environment . . 40 
Aspects of Volcanic Science 44 


1. Cyclical Behavior of Climate 51 

Long-term Temperature Cycles 

and Their Significance 51 

Fluctuations in Climate over Periods 

of less than 200 Years 55 

Environmental Cyclic Behavior: The 
Evidence of Tree Rings and Pollen 
Profiles 59 

2. Causes of Climatic Change 62 

Basic Factors in Climatic Change .... 62 
The Radiation Balance 65 

Climatic Change and the Effects of 

Civilization 69 

Environmental Change in Arid 

America 73 


1. Oceanic Circulation and Ocean- 
Atmosphere Interactions 77 

Oceanic Circulation and the Role 

of the Atmosphere 77 

On Predicting Ocean Circulation .... 79 

Hydrodynamic Modeling of 

Ocean Systems 81 

Effects of Antarctic Water on 

Oceanic Circulation 83 

Tropical Air-Sea Rhythms 84 

2. Atmospheric Circulation 89 

Modeling the Global Atmospheric 

Circulation 89 

3. Weather Forecasting 93 

Short-, Medium-, and Long-Term 

Forecasting 93 

Long-Range Weather Forecasting ... 97 

Short-Term Forecasting, including 
Forecasting for Low-Altitude 
Aviation 101 

4. Clear Air Turbulence 105 

Clear Air Turbulence and 

Atmospheric Processes 105 

Prediction and Detection of 

Wave-Induced Turbulence 107 

A Note on Acoustic Monitoring 112 

5. Urban Effects on Weather and Climate . . . 113 

Urbanization and Weather 113 

The Influence of Urban Growth on 

Local and Mesoscale Weather 116 

Urban Effects on Weather — the 

Larger Scales 118 




1. Hurricanes 123 

The Origin of Atlantic Hurricanes . . . 123 

A report on Project STORMFURY: 
Problems in the Modification of 
Hurricanes 127 

The Scientific Basis of Project STORMFURY 


A Note on the Importance of 

Hurricanes 132 

Geomorphological Effects of 

Hurricanes 133 

2. Tornadoes 137 

Status of Tornado Research 137 

Tornadoes — Their Forecasting 

and Potential Modification 144 

Tornado Forecasting and Warning . . 146 

3. Hail 149 

Hailstorm Research and Hail 

Suppression 149 

Current Status of Hail Prevention . . . 154 

4. Lightning 157 

Basic Processes of Lightning 157 

Reduction of Lightning Damage 

by Cloud Seeding 160 


1. Drought 165 

The Causes and Nature of Drought 

and its Prediction 165 

2. Precipitation Modification 169 

Artificial Alteration of Natural 

Precipitation 169 

The Status of Precipitation 

Management 172 

3. Fog 180 

Modification of Warm and Cold Fog . 180 
Fog Dispersal Techniques 182 

4. Tropical Weather 184 

Monsoon Variations and Climate 

and Weather Forecasting 184 


Tropical Meteorology, with Special 

Reference to Equatorial Dry Zones. 187 

5. Dust 191 

African Dust and its Transport into 

the Western Hemisphere 191 


1. Water Resources 197 

Estimating Future Water Supply 

and Usage 197 

Water Movement and Storage in 

Plants and Soils 200 

A Note on Subsidence and the 
Exhaustion of Water-Bearing and 
Oil-Bearing Formations 203 

2. Forestry 205 

Water Quality in Forests 205 

Factors Relating Forest Management 

to Water Quality 212 

3. Agriculture 215 

Global Food Production Potentials . . . 215 
The Hazard of Drought 218 


1. Component Relationships 225 

Trophic Dynamics, with Special 

Reference to the Great Lakes 225 

Effects of Artificial Disturbances on 

the Marine Environment 230 

Marine Flora and Fauna in the 

Antarctic 231 

Systems Approaches to Understanding 
the Oceans and Marine Productivity 233 

2. Oceanic Production 236 

Primary Plant and Animal Life in the 

World Ocean 236 

The Southern Oceans in the 

Production of Protein 240 

Scientific Aspects of North Pacific 

Fisheries 242 

Some Scientific Problems Associated 

with Aquatic Mammals 245 

3. Estuaries and Coastal Zones 248 


The Relationship of Fisheries to 
Estuaries, with Special Reference 

to Puget Sound 248 

Prospects for Aquaculture 250 

4. Dynamics of Lakes 254 

Lake Circulation Patterns 254 

The Effects of Thermal Input on 

Lake Michigan 257 

5. Lake Eutrophication and Productivity .... 261 

Fishery Deterioration in the Great 

Lakes 261 

Problems of Eutrophication in the 

Great Lakes 267 

Pollution and Recovery in Lake 

Washington 270 


1. Component Relationships 277 

Environmental Design 277 

Maintenance of the Biosphere, with 

Special Reference to Arid Lands . . . 280 

Energy Relationships in Ecological 

Systems 285 

A Note on Soil Studies 291 

2. Forest Ecosystems 292 

The Forest as an Ecosystem 292 

A Note on Hubbard Brook 293 

Tropical Forests 295 

Comparison of Temperate and 

Tropical Forests 298 

3. Forest Animals 302 

Problems of Animal Ecology in 

Forested Areas 302 

Wilderness as a Dynamic Ecosystem, 
with Reference to Isle Royale 

National Park 303 

4. Forest Fire 306 

Research into Fire Ecology 306 

The Role of Fire in Forest 

Management and Ecology 308 

5. Polar Ecosystems 313 

Polar Flora and Vegetation 313 


Effects of Environmental Pollutants 
and Exposures on Human Health 
and Well Being 319 

1. Airborne Chemicals 329 

Chemical Contaminants in the 

Atmosphere 329 

Atmospheric Contaminants and 

Development of Standards 332 

Modeling the Atmosphere 335 

Problems in the Ecology of Smog .... 337 

2. Airborne Biological Materials 339 

Atmospheric Dispersal of Biologically 
Significant Materials 339 

Biological Monitoring Techniques for 

Measuring Aeroallergens 345 

3. Pests and Pesticides 350 

Environmental Pollution and Pesticides 350 
Pesticides and the Pollution Problem . 354 

4. Marine Contaminants 357 

Effects on the Ocean of Atmospheric 
Circulation of Gases and Particulate 
Matter 357 

Oil on the Sea Floor 361 

5. Environmental Disease 364 

Malaria 364 

Other Parasitic Diseases 367 


Genetic Adaptation to the 

Environment 373 

Aspects of Man's Adaptation in the 

Tropics 378 

Adaptation to High Altitude 379 

Adaptation to Smog and Carbon 

Monoxide 385 

APPENDIX: Summary and Recommendations .... 391 


INDEX 401 











Solar Flare 4 

The Interplanetary Medium 6 

The Magnetosphere 7 

The Ionosphere 9 

Atmospheric Temperature Distribution 12 


Regions of the Earth's Interior 21 

Chronology of Earth's Magnetic Field Reversals 24 

Six Shifting Plates of the Earth 27 

Continental Drift 33 

Seismicity of the Earth 36 

The Upper Mantle in the Region of 

Fiji-Tonga-Raratonga 37 

Seismic Risk in the United States 38 

U.S. Volcanoes 45 








Average Water Level in Lake Victoria 

Changes in the Temperature of the 

Ocean Surface 

Temperature Curves Derived from Oxygen 

Isotope Ratios of Deep-Sea Cores 

Variations of the Mean Annual Temperature 

of the Northern Hemisphere 

Precipitation Patterns from Tree Rings 

Computer Simulation of Sea-Level 

Pressure Field 

Factors in the Radiation Balance of the Earth 
Observed Lagged Temperature Variation of 

the Northern Hemisphere 

Lagged Temperature Curve for the Northern Hemisphere Corrected for CO2

Lagged Temperature Curve for the Northern Hemisphere Corrected for CO2 and Dust










IV-1: Sea-Surface Temperatures 77 

IV-2: Classification of Waves and Currents 81 

IV-3: Antarctic Waters and Their Circulation 83 

IV-4: Canton Island Data 85 

IV-5: Walker's "Southern Oscillation" 86 

IV-6: SIRS Sounding 90 

IV-7: Availability of Upper Air Data 92 

IV-8: Data Required for Forecasts 94 

IV-9: Forecasting Skill 99 

IV-10: Waves and Turbulence in the Clear 

Atmosphere 110 

IV-11: Weather Changes Resulting from Urbanization 114 

IV-12: Heat Island Effect 117 


V-1: A History of Hurricane Seedings 124 

V-2: Hurricane Beulah, 1967 124 

Figure Page 

V-3: Probability Forecasts for Hurricanes 125 

V-4: Hurricane Losses by Years 127 

V-5: Hurricane Camille, 1969 134 

V-6: Comparative Losses due to Severe Storms 

and Hurricanes 137 

V-7: Radar View of a Hooked Echo 139 

V-8: Contour-mapped PPI Display 142 

V-9: Contour-mapped Digital Display 143 

V-10: Severe Weather Warning 147 

V-11: Structure of Hailstone Embryos 150 

V-12: Hail Suppression at Kericho, Kenya 153 

V-13: A Midwest Thunderstorm 155 

V-14: Lightning 157 

V-15: The Initiation of a Lightning Stroke 159 




Annual Worldwide Precipitation 166 

Precipitation Processes 169 

Lattice Structures of Agl and Ice 173 

Temperature Dependence of Nucleating Agents 174 

Optimum Seeding Conditions 176 

Simulated Effect of Cloud Seeding 177 

Concentration of Ice Nuclei in a City 179 

A Driving Hazard 180 

Results of Fog-Seeding Programs 182 

Monsoonal Areas 184 

Array for Barbados Oceanographic and 

Meteorological Experiment (BOMEX) 186 

Frequency of Tropical Cyclones 189 

Dust over the Tropical Atlantic 191 



VII-1: Disposition of Water Diverted for Irrigation Purposes 199 

VII-2: The Hydrologic Cycle 201 

VII-3: Subsidence in Long Beach, California 203 

VII-4: Ownership of U.S. Forest Lands 205 

VII-5: Effects of Forest Fires 208 

VII-6: Relation of Sediment Particle-Size to Flow Rate 210 

VII-7: Effect of Land Use on Sediment Yield and Channel Stability 211 

VII-8: Potentially Arable Land in Relation to 

World Population 215 

VII-9: Transplanted Species 216 

VII-10: Comparative Perceptions of Feasible 

Adjustments to Drought 219 


VIII-1: Trophic Levels 225 

VIII-2: Effect of Alewives on Zooplankton 229 

VIII-3: Sensitivity of Phytoplankton to Insecticides 234 

VIII-4: Some Phytoplankton 237 

VIII-5: Some Zooplankton 238 

VIII-6: An Antarctic Food Chain 239 

VIII-7: Distribution of the World's Fisheries 243 

VIII-8: The Fate and Distribution of Marine Pollutants 245 

VIII-9: A Purse Seine 246 

VIII-10: Cost of Economic Activities in Corpus Christi Bay 249 

VIII-11: Scheme for Using Sewage in Aquaculture 252 

VIII-12: Upwelling of Coastal Lake Waters 256 

VIII-13: Thermal Influence of Electric Power Generation on Lake Michigan 258 

VIII-14: Commercial Fish Catch — Lake Michigan 261 

VIII-15: The Effect of Fertilizer on Nitrate Concentrations in Rivers 265 

VIII-16: Transparency Measurements in Lake Washington 271 

VIII-17: Measurements of Algae, Phosphorus, and Nitrogen in Lake Washington 273 


IX-1: Seral Stages of a Deciduous Forest 277 

IX-2: A Systems Model for a Grassland Ecosystem 279 

IX-3: Mosquito Submodel 282 

IX-4: A Model Validation Study 284 

IX-5: Major World Biomes 286 

IX-6: Plant-Mouse-Weasel Chain 287 

IX-7: Energy Budget of a Horse 288 

IX-8: Relation Between Food Intake and Calorific Equivalence of Invertebrates 290 

IX-9: Ecological Effects of Deforestation 294 

IX-10: The Effect of Tree Cover Removal in the Tropics 297 

IX-11: Comparison of Temperate and Tropical Forest Types 299 

IX-12: Life Expectancy and Survivorship of Isle Royale Moose 305 

IX-13: Effect of Fire on Mesquite Shrubs 306 

IX-14: Quantities of Nutrients Released by Burning Tropical Vegetation 309 

IX-15: A Section of the Tundra Biome 313 

IX-16: Flow Diagram of a Wet Coastal Tundra Ecosystem 315 


X-1: Composition of Clean, Dry Air 329 

X-2: Pollution — An Environmental Problem 331 

X-3: Atmospheric Scales 332 

X-4: A System for Discussing Air Pollution 333 

X-5: Projection of Physical, Economic, and Social Relationships 338 

X-6: Atmospheric Particulate Matter Important in Aerobiology 

X-7: Components of a Model for Pollen Aerobiology 

X-8: Average Annual Losses from Crop Diseases in the United States 342 

X-9: Distribution of Ragweed Pollen in the United States 346 

X-10: Efficiency of Cylindrical Collectors for Ragweed Pollen 348 

X-11: Resistance of Insects and Mites to Pesticides 350 

X-12: Pesticide Usage and Agricultural Yields 351 

X-13: Resurgence of California Red Scale 355 

X-14: Concentration of DDT in a Lake Michigan Food Chain 356 

X-15: Comparison of Caucasian Dust Fall and the Soviet Economy 359 

X-16: PCB Residue in Fish, Birds, and Mammals 360 

X-17: Petroleum Hydrocarbon Contamination in the Marine Environment 361 

X-18: Changes in Malaria Morbidity Before and After Mosquito Control 364 

X-19: Areas of Major Malaria Potential 366 

X-20: World Distribution of Schistosomiasis 368 

X-21: Ejection of Small Droplets into the Atmosphere by Bursting Bubbles 370 


XI-1: Distribution of the Yanomama Indians in South America 375 

XI-2: Cytogenetic Findings in 49 Yanomama Indians from Two Villages in Venezuela 375 

XI-3: Frequency of Sickle-cell Gene in Liberia 377 

XI-4: Changes in Oxygen Consumption Capacity of Lowlanders upon Upward Migration 380 

XI-5: Oxygen Consumption Capacity Among High-Altitude Natives 381 

XI-6: Growth Rate Differences Between Nunoa and U.S. Children 384 

XI-7: Possible Epidemiological and Pathophysiological Mechanisms Relating Carbon Monoxide and Myocardial Infarction 386 

XI-8: Rates of Chronic Bronchitis and Emphysema for Smokers and Non-Smokers 387 

XI-9: Hemodynamic and Respiratory Responses of Five Normal Subjects to Carboxyhemoglobin (COHb) 389 







The natural environment of man 
consists of a single, gigantic system, 
all of whose parts continuously in- 
teract. It has been customary over 
the centuries to view certain of these 
parts in isolation: the atmosphere of 
winds and moisture; the hydrosphere 
of oceans, lakes, rivers, and ground- 
water; the biosphere of living things; 
and the lithosphere, or the crustal 
portion of the "solid" earth. Only 
during recent decades has a general 
awareness been developing that the 
behavior of each of these parts is 
fundamentally influenced, and indeed 
frequently controlled, by the behavior 
of the others. Even less apparent to 
many has been the role of the more 
remote parts of our environment: (a) 
the deeper reaches of the earth that 
lie beneath the crust, and (b) the vast 
region that extends from the upper 
levels of the atmosphere to the sun — 
and even beyond to the sources of 
much of the cosmic radiation that 
continues to bombard the earth. The 
latter is designated here as the "solar- 
terrestrial system" and forms the 
starting point of this review of the 
status of environmental science. The 
system can be divided conveniently 
into five parts: 

The Sun — an undistinguished star, 
but the principal source of the energy 
that drives our environmental system. 

The Interplanetary Medium — pre- 
viously considered a vacuum, this 
enormous region between the sun 
and the near-earth environment is 
now known to be filled with matter, 
largely electrons and protons (hy- 
drogen nuclei), originating in the 
outer reaches of the solar atmosphere 
(the "corona") and rushing outward 
at great speeds as the "solar wind." 

The Magnetosphere — that region 
of space in which the earth's magnetic 
field dominates charged-particle mo- 
tion. An enormous storehouse of 
solar energy, the magnetosphere is 

bounded, on the sunward side, by 
the "magnetopause," which is the in- 
ner boundary of a transition region 
(the "magnetosheath") beyond which 
lies the solar wind. In the direction 
away from the sun, the magneto- 
sphere stretches beyond the orbit of 
the moon in a long tail, like the tail 
of a comet. 

The Ionosphere — a region containing free electrically charged particles by means of which radio waves are transmitted great distances around the earth. Within the ionosphere are several regions, each of which contains one or more layers that vary in height and ionization depending on time of day, season, and the solar cycle. 

The Upper Atmosphere — an electrically neutral region, whose chief characteristics derive from the absorption of solar ultraviolet radiation. The upper atmosphere (the "thermosphere" and "mesosphere") overlaps the lowest level of the ionosphere but also extends below it. Near 50 kilometers from the earth, at the "stratopause," the upper atmosphere gives way to the atmospheric layers that immediately surround the earth (the "stratosphere" and "troposphere"). 

This enormous volume of space is 
matched by the great range of physi- 
cal mechanisms that occur. In the 
closer regions of the upper atmos- 
phere, solar-terrestrial science is con- 
cerned with many of the concepts 
that meteorologists have evolved in 
dealing with the weather systems of 
the lower atmosphere; at the outer 
extremes, the methods of astrophysics 
and high-temperature plasma physics 
must be utilized. The status of solar- 
terrestrial science is thus strongly de- 
pendent on the specific phenomena 
being considered, for scientific prog- 
ress has not been uniform across this 
complex system. 

At the same time, the solar-terres- 
trial system, considered as a whole, 
is both the source of beneficial radia- 
tion, without which life itself could 
never have developed on the earth, 
and the mechanism for controlling 
harmful radiation. Without this con- 
trol mechanism, life could not long 
survive. The whole range of solar-terrestrial relationships is therefore of the greatest environmental concern. 

The past twenty years have pro- 
duced a wealth of detail and at least 
partial understanding of the activity 
going on in this region. The knowl- 
edge has not produced quantitative 
models of the dynamical effects on 
the earth environment. The effects 
are too complicated — in the same 
way that weather is still too com- 
plicated for satisfactory quantitative 
modeling. But the knowledge informs 
us about what is happening, allowing 
us to understand the effects and to 
avoid them in some cases, thus permitting intelligent planning for the future. 

The Sun 

A powerful source of energy, gen- 
erated by thermonuclear processes, 
the sun can nevertheless be expected 
to remain in its present condition, 
emitting radiation at a more or less 
constant rate, for an extremely long 
time. This surmise is based on astro- 
nomical observations of stars similar 
to the sun. Scientific attention is 
therefore directed principally to as- 
pects of solar activity, and its attend- 
ant radiation, that are more variable 
in time. 

Most of the variability in solar 
radiation is associated with (a) the 
11-year solar-activity (or sunspot) 
cycle, (b) the "active regions" that 
are often displayed at the peak of 
the cycle and are the source of intense 
fluxes of extreme ultraviolet (EUV) 
radiation, X-rays, and energetic par- 
ticles (chiefly protons and electrons), 
and (c) the "solar flares" that burst 
forth from within these active re- 
gions. (See Figure 1-1) 

None of these three phenomena is
well understood, and the outstanding
questions about the sun at present are:

1. What is the basic reason for the 
11-year solar-activity cycle? 

2. What are the mechanisms un-
derlying the emission of the
more "exotic" portions of the
spectrum — i.e., X-rays and
EUV radiation at the short
wavelength end and radio
waves at the long end? Is
there anything that we should
know about such relatively un-
explored regions of the spec-
trum as the infrared and mil-
limeter-wave radiations?

3. What is the basic mechanism
responsible for solar flares?

Figure 1-1 — SOLAR FLARE

Solar flares, usually lasting only a few minutes, form very rapidly in disturbed re-
gions around sunspots. Flares occur quite frequently near the maximum of the solar
activity cycle and are related to catastrophic changes in the powerful magnetic fields
that are associated with sunspots.

The Eleven-Year Solar-Activity 
Cycle — The basic mechanism that 
produces this cycle is not known. It 
is almost surely bound up with the 
internal structure of the sun, which 
will not be accessible to direct ob- 
servation for the foreseeable future. 
Thus, the answer to the first question 
is not likely to be reached with any 
degree of certainty for a considerable 
time, although theoretical mechanisms 
to explain solar activity should be 
generated and tested as far as pos- 
sible against observation. It remains 
the most basic of all outstanding 
questions of solar physics. 

Active Regions — There is more 
hope that a solution will ultimately 
be found to the problem of growth 
of individual active regions, as well 
as the occurrence of flares within 
these regions. It is known, for ex- 
ample, that magnetic fields play an 
important role in the associated en- 
hanced ultraviolet and X-ray emis- 
sions, in the growth of sunspots 
(around which magnetic fields at- 
tain strengths as great as 1,000 
gauss), and in the sudden birth of 
flares. Observations suggest that re- 
gions of strong magnetic field are 
carried outward by convection from 
the interior of the sun. When these 
magnetic fields break through the 
visible surface, we see their effect 
in the form of active regions and 
sunspots. As the magnetic fields ex- 
tend outward into the solar atmos- 
phere, they encounter less and less 
material. Flares originate at some 
location within this outer solar atmosphere.

It is, thus, likely that answers to 
the second question posed above will 
come eventually from a gradual ex- 
tension of present work, in the form 
of refinement of satellite and space- 
probe observations and the continua- 
tion of the ground-based observations 
that have provided the core of our 
knowledge about the sun. 

Solar Flares — Solar flares are 
cataclysmic outbursts of radiation, 
similar to those generally observed 
from active regions, but in immensely 
greater quantities and with much 
higher energies. Fortunately, individ- 
ual flares are short-lived (of the order 
of an hour at most), and the most 
intense ones are quite rare, even at 
the peak of the solar cycle. 

The effects of flares on the near- 
earth environment make them by far 
the most important solar phenom- 
enon. The sudden surges of radiation 
they produce constitute a major haz- 
ard to manned space flights and a 
hazard of uncertain magnitude to 
the passengers and crew of super- 
sonic-transport aircraft on transpolar 
flights, where natural solar-radiation 
shields are less effective than else- 
where. Flares also increase the elec- 
trical conductivity of the lower part 
of the earth's ionosphere, giving rise 
to severe interruptions of radio and 
telegraph communications, particu- 
larly at high latitudes. 

Considerable progress in predict- 
ing major solar flares has been made 
through observations of time varia- 
tions of the magnetic field configura- 
tion within known active regions in 
the lower solar atmosphere. While 
improvements in empirical forecast- 
ing techniques of this kind can be 
expected, truly accurate predictions 
must await an understanding of the 
basic physical mechanisms respon- 
sible for the development of a flare. 
Many promising suggestions have 
been put forward, but none has proved 
entirely satisfactory. Some think a 
flare is caused by the annihilation of 
magnetic fields. Another interesting 
possibility has emerged from satellite 
probes of the "auroral substorms" 
that occur in the earth's outer mag- 

netosphere (see page 8). There is 
an apparent analogy between many 
of the observed radiation character- 
istics of these substorms and those 
of solar flares, opening up the pos- 
sibility that their mechanisms are 
basically similar, though with modifi- 
cations appropriate to the different 
solar environment. 

Other Research Needs — Solar 
EUV radiation is largely responsible 
for the existence of the earth's iono- 
sphere, and the broad nature of that 
responsibility is now fairly clear. 
Many of the details, however, re- 
main beyond our grasp. The de- 
tailed structure of the sun's radiation 
spectrum in the EUV and X-ray 
regions, the points of origin of these 
radiations at the sun, and the mech- 
anisms responsible for producing 
them are still areas of considerable 
uncertainty. Much progress is likely 
to come from the satellite and space- 
probe programs aimed at long-term 
monitoring of solar radiation in these 
wavelength regions with high an- 
gular resolution. 

The Interplanetary Medium 

The broad features of the inter- 
planetary medium are known and 
understood. (See Figure 1-2) Inter-
planetary space is in fact filled with
material, or plasma, from the outer 
reaches of the solar atmosphere (the 
"corona"). It is made up for the 
most part of electrons and protons 
(hydrogen nuclei), with small quan- 
tities of helium and traces of heavier 
nuclei. As a result of the instability 
of the outer solar corona against 
expansion, this material is rushing 
outward from the sun at speeds of 
the order of 400 kilometers per sec- 
ond, forming the "solar wind." 

Many important details are still 
missing from this picture, however. 
For example, solar-wind matter is 
believed to constitute a sample of 
the material that exists in the upper 
solar corona. After a solar flare has 
erupted, however, the nuclear com-
position of the solar wind has been
seen to change quite suddenly to one
that contains up to 20 percent helium, 
with appreciable amounts of heavier 
elements. This material is probably 
that of the lower solar atmosphere, 
near the base of the corona or in the 
chromosphere, where flares originate. 
Spacecraft are providing an oppor- 
tunity to study fairly directly these 
interesting differences in the chemical 
composition of different regions of 
the sun itself. Comparison of solar- 
wind compositions with the terres- 
trial composition may produce in- 
sights into how the earth and solar 
system were formed. 

"Collisionless" Shock Waves — 
Another important area for study is 
the reason for the fluid-like behavior 
of the solar wind. In conventional 
fluids, particles interact by collisions, 
but collisions between individual 
solar-wind particles are extremely 
rare. Nevertheless, the solar wind 
displays many of the properties of 
a continuous fluid. In particular, the 
wind's outward expansion is super- 
sonic, in the sense that its speed 
relative to the sun and planets is 
greater than the speed with which 
waves can propagate through the 
medium. As it sweeps past any solid 
body in the solar system, the wind 
forms a standing shock wave up- 
stream of the body, analogous to the 
shock wave ahead of an aircraft in 
supersonic flight. The width of the 
wave that forms around the earth 
is determined by the outward extent 
of the earth's magnetic field, rather 
than by that of the solid earth itself, 
because the material in the solar 
wind, being a good electrical conduc- 
tor, is strongly affected by magnetic 
fields. The earth's shock wave is 
much larger than that formed around 
other, more weakly magnetized bod- 
ies like the moon, Venus, and Mars. 

Collisionless shock waves are phe- 
nomena that may have an important 
bearing on our understanding of the 
basic plasma physics that holds the 
key to controlled thermonuclear fu- 
sion. They have been difficult to
produce under laboratory conditions,
and their properties even harder to
measure, because the probes used
have generally been larger than the
thickness of the shock wave itself.
Now, however, the shock wave on
the sunward side of the earth's mag-
netosphere provides a natural labora-
tory for studying collisionless shocks;
space-probe techniques, in which the
probe dimensions are much smaller
than the shock thickness, are likely to
produce a great deal of valuable
information.

Figure 1-2

In addition to visible radiation, both steady and sporadic electromagnetic emissions
from the sun extend over a large range of wavelengths (radio to X-ray). Low energy
charged particles in the expanding outer corona form the solar wind which, together
with the extended solar magnetic field, dominates the environment of the interplane-
tary medium. Occasionally, great flares in active regions emit charged particles of
cosmic ray energy.

The Interplanetary Magnetic 
Field — The solar-wind material is 
permeated by a weak magnetic field, 
also of solar origin. This interplan- 
etary magnetic field plays an impor- 
tant role in guiding the highly 
energetic flare particles toward or 
away from the earth. The detailed 
behavior of the field is exceedingly 
complex, however, and not well un- 
derstood. Furthermore, the picture is 
complicated by the often irregular, 
or "turbulent," structure of the mag- 
netic field, which causes particles to 

diffuse outward from the sun much 
as chimney smoke diffuses in the tur- 
bulent atmosphere. This turbulence 
is highly variable and depends on the 
general background of solar activity 
at any particular time. 

Since the effects of energetic par- 
ticles reaching the vicinity of the 
earth are generally undesirable, an 
ability to predict their arrival would 
be useful. One fact that helps in 
their prediction is that, because the 
sun rotates, interplanetary magnetic 
field lines stretch out in a spiral, 
much like water drops from a rotating 
garden sprinkler. Hence, the earth is 
connected magnetically to a point 
well to the western side of the sun's 
visible disc rather than to the center, 
and intense flares originating in the 
western portion of the disc are more 
likely to produce serious effects than 
those erupting in the eastern portion. 
Nevertheless, a great deal more work, 
both observational and experimental, 
is needed to lay the foundation for 
accurately predicting the arrival of 
potentially harmful particles. 
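
The sprinkler analogy can be made quantitative with the standard spiral-field estimate: the angle between the interplanetary field and the sun-earth line follows from the ratio of the sun's rotational sweep to the wind's radial speed. This is a minimal sketch using the 400 km/s wind speed quoted earlier; the roughly 25-day solar rotation period and the 1 AU sun-earth distance are standard values assumed here, not taken from the text:

```python
import math

AU_M = 1.496e11                # sun-earth distance in meters (assumed standard value)
ROT_PERIOD_S = 25.4 * 86400.0  # solar rotation period, ~25.4 days (assumed)
WIND_SPEED_M_S = 400e3         # solar-wind speed quoted in the text

def spiral_angle_deg(r_m=AU_M, v_m_s=WIND_SPEED_M_S):
    """Angle between the interplanetary magnetic field and the radial
    (sun-earth) direction at distance r_m, for wind speed v_m_s."""
    omega = 2.0 * math.pi / ROT_PERIOD_S   # solar angular velocity, rad/s
    return math.degrees(math.atan(omega * r_m / v_m_s))

print(f"{spiral_angle_deg():.0f} degrees")  # roughly 45-50 degrees at 1 AU
```

A field line reaching the earth therefore leaves the sun well to the west of the sub-earth point, which is why flares on the western half of the disc are the better-connected, and more troublesome, ones.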

Blast Waves — Major solar flares 
are accompanied by blast waves 
which move out from the sun at 
speeds of the order of 1,000 kilo- 
meters a second, sweeping the am- 
bient solar-wind plasma ahead of 
them and bringing in their train a 
greatly enhanced solar-wind flow. 
The more intense blast waves are 
not appreciably affected by inter- 
planetary conditions. As the blast 
waves encounter the earth, they pro- 
duce major effects on the magneto- 
sphere, giving rise to worldwide 
magnetic storms and visible auroras 
(often at much lower latitudes than 
the conventional auroral zones). They 
also provide the most important 
sources of fresh material for the 
radiation belts that surround the
earth.

Ability to predict these effects is 
a matter of some practical impor- 
tance, since serious interruptions in 
radio communications and even in 
domestic power supplies may result. 


Short-term prediction of blast waves 
is not too difficult, since the appear- 
ance of an intense flare on the sun 
gives one or two days advance warn- 
ing. Longer-term prediction is in- 
volved with the problem, discussed 
earlier, of long-term prediction of 
flares themselves; this problem re- 
mains unresolved due to our relative 
lack of understanding of the basic 
mechanisms underlying solar activity. 
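
The "one or two days" figure is consistent with simple arithmetic on the quoted blast-wave speed (a sketch; the 1 AU sun-earth distance is an assumed standard value):

```python
AU_KM = 1.496e8  # sun-earth distance in kilometers (assumed standard value)

def arrival_days(speed_km_s):
    """Days for a disturbance moving radially outward at speed_km_s
    to cover the sun-earth distance."""
    return AU_KM / speed_km_s / 86400.0

# A blast wave at the quoted 1,000 km/s arrives in under two days,
# while the ambient 400 km/s solar wind takes more than four.
print(f"{arrival_days(1000.0):.1f} days")
print(f"{arrival_days(400.0):.1f} days")
```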

In general, the major practical re- 
sult of increasing our knowledge of 
the interplanetary medium would be 
an improved ability to predict solar- 
flare particle effects in the vicinity 
of the earth. Basic advances in our 
understanding of the processes gov- 
erning collisionless plasmas, and of 
the origin of the solar system itself, 
are also likely consequences, and 
should be pursued. 

The Magnetosphere

The magnetosphere (see Figure 1-3)
is the outer region of the earth's
ionized atmosphere, in which the
medium is sufficiently rarified that
collisions between charged and neu-
tral particles can be neglected and
the behavior of the charged particles
is dominated by the earth's magnetic
field. It can be regarded as the region
in which control of the environment
by the solar wind gives way to con-
trol by the earth. As such, it is an
enormous storehouse of solar energy,
extending out to a distance of some
10 earth-radii in the direction of the
sun and to a much greater distance,
perhaps as much as 1,000 earth-radii,
in the opposite direction.

The magnetosphere extends from
the "magnetopause," where the geo-
magnetic field terminates, down to
a height of about 250 kilometers
above the surface of the earth, and
thus includes a large part of the con-
ventional ionosphere. This section
will be devoted to the outer regions
of the magnetosphere proper; the
inner portion will be treated as part
of the ionosphere in the next section.

Figure 1-3

This conceptual model of the earth's magnetic field is based on years of spacecraft
observations. The dot marked "moon" indicates the relative distance at which the
moon's orbit intersects the plane containing the sun-earth line and geomagnetic axis.

The Sunward Side — The mag-
netopause marks the true boundary
between the plasma originating at
the sun and that belonging to the
earth. On the sunward side of this
boundary lies the immense shock
wave described in the previous sec-
tion, which stands some 15 earth-
radii out from the center of the
earth, as well as a region about 5
earth-radii thick known as the "mag-
netosheath"; the latter is made up
of solar-wind plasma that has been
disoriented by passage through the
shock wave, together with tangled
irregular magnetic field.

The existence of something like
the magnetopause had been predicted
theoretically long before the Space
Age; its existence has now been
verified by satellites and space-
probes carrying magnetometers. But
many of its observed properties re-
main puzzling. Furthermore, most
of the observations have been con-
fined to near-equatorial regions, while
many of the important problems of
energy transfer from the solar wind
to the magnetosphere hinge on the
existence and properties of the mag-
netopause over the polar caps. Here
practically no information exists.

The Geomagnetic Tail — The con-
figuration of the outer magnetosphere
in the direction pointing away from
the sun is quite different from that
in the solar direction. Instead of
being compressed by the solar wind
into a volume sharply bounded by
the magnetopause, the magneto-
sphere in the anti-solar direction is
stretched out by the action of the
solar wind into a long "tail," much
like the tail of a comet. The geo-
magnetic field lines are straight, with
the field itself directed away from
the earth (and the sun) in the south-
ern half and toward the earth in the
northern half.
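
The standoff distances cited in this section, a magnetopause some 10 earth-radii out on the sunward side with the shock somewhat beyond it, can be recovered from a pressure balance between the solar-wind ram pressure and the compressed geomagnetic field. This is a rough sketch; the wind density of 5 protons per cubic centimeter and the 3.1 x 10^-5 tesla equatorial surface field are typical values assumed here, not given in the text:

```python
MU0 = 1.2566e-6   # vacuum permeability, T*m/A
M_P = 1.67e-27    # proton mass, kg
B0 = 3.1e-5       # equatorial surface field, tesla (assumed typical value)
N_WIND = 5.0e6    # solar-wind proton density, m^-3 (assumed typical value)
V_WIND = 400e3    # solar-wind speed quoted in the text, m/s

def standoff_earth_radii():
    """Distance (in earth radii) at which solar-wind ram pressure equals
    the magnetic pressure of the compressed dipole field:
    rho*v^2 = (2*B0)^2/(2*mu0) * (R_E/r)^6, the doubling of B0
    representing compression of the field at the boundary."""
    ram_pressure = N_WIND * M_P * V_WIND**2
    return (2.0 * B0**2 / (MU0 * ram_pressure)) ** (1.0 / 6.0)

print(f"{standoff_earth_radii():.1f} earth radii")  # close to 10
```

Because of the one-sixth power, even large swings in wind pressure move the boundary by only an earth-radius or two, which is why the quoted figure is so stable.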

The geomagnetic tail is now rec- 
ognized to play a vitally important 
intermediate role as a reservoir of 
stored solar-wind energy. Its for- 
mation requires some form of energy 
transport across the boundary be- 
tween the magnetosphere and the 
solar wind, but whether this transport 
is accomplished by a process analog- 
ous to viscosity in a fluid, or by the 
coupling together of geomagnetic and 
interplanetary magnetic fields, or by 
some more exotic process is not yet
known.

Equally mysterious are the pro- 
cesses by which the tail releases en- 
ergy. While some of the enormous 
energy stored in the tail is continually 
being drained into the earth's at- 
mosphere, the most dramatic releases 
are associated with relatively short 
bursts, known as auroral substorms, 
which can recur at intervals of a 
few hours. They are accompanied 
by disruptions of radio communica- 
tions and surges on long power lines 
that can result in power outages. 
Associated increases in radiation-belt 
particle fluxes shorten the lives of 
communication satellites by degrad- 
ing the performance of the solar cells 
on which their power supply depends. 

As noted earlier, the substorms are 
thought to have many analogies to 
solar flares. An understanding of 
their mechanisms may thus lead to 
an understanding of the flare mech- 
anism. This understanding is vital 
to our future ability to predict the 
whole gamut of solar-terrestrial phe- 
nomena that affect communications 
and power supplies and may also 
provide some insight into the plasma- 
confinement mechanisms that are 
needed to achieve controlled thermo-
nuclear fusion. Fortunately, the sub-
storm mechanism can be studied di- 
rectly through satellite probes of the 
tail region in which the release of 
energy takes place. 

Radiation Belts — The great en- 
ergy released in the form of an 
auroral substorm also serves to re- 
plenish the radiation belts that sur- 
round the earth with magnetically 
trapped particles. The discovery of 
these belts was the first dramatic 
result of the Space Age in terms of 
exploration of our near-space envi- 
ronment. A broad mapping of their 
structure and behavior has now been 
obtained, although no complete ex- 
planation yet exists of the sources 
of the belts or of their dynamic 
behavior. At first the belts were 
thought to be fairly static and well- 
behaved. Nature seemed to have pre- 
sented us with an example of stably 
confined high-temperature plasma. 
It is now clear, however, that the 
individual particles in the outer por- 
tions of the belts are continuously 
experiencing a variety of processes, 
including convection in space, accel- 
eration, and precipitation into the 
atmosphere. Plasma instabilities of 
some kind associated with the growth 
of hydromagnetic and electromag- 
netic waves in the magnetosphere 
seem to be of major importance. 
Similar instabilities have prevented 
the confinement of high-temperature 
plasmas in the laboratory. 

The Plasmapause — In addition to
confining the magnetosphere to a 
sharply bounded cavity on the sun- 
ward side, and stretching it out into 
a long tail in the anti-solar direction, 
the solar wind apparently generates 
a vast system of convection that 
affects the plasma throughout the 
outer magnetosphere. This convec- 
tion system pulls plasma from the 
sunward side of the magnetosphere 
over the top of the polar caps into 
the tail, where a return flow carries 
it back toward the earth, around the 
sides, and back out to the front of the
magnetosphere.

Another of the great boundary 
surfaces of the magnetosphere, known 
as the "plasmapause," marks the di- 
viding line between plasma that is 
influenced by this convection and 
plasma that is tightly bound to the 
earth and corotates with it. The 
plasmapause generally lies some 4 
earth-radii out from the center of 
the earth above the equator, and 
follows the shape of the geomagnetic 
field lines from there to meet the 
ionosphere at about 60° magnetic 
latitude. In common with other mag- 
netospheric boundaries, such as the 
magnetopause, it is extremely well 
marked, and the properties of the 
magnetospheric plasma change 
abruptly in crossing it. 
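
The two numbers above, about 4 earth-radii at the equator and about 60° magnetic latitude at the ionosphere, are mutually consistent dipole geometry: a dipole field line crossing the equator at L earth-radii meets the surface at magnetic latitude arccos(1/sqrt(L)). A quick check, using the standard dipole field-line relation (not derived in the text):

```python
import math

def footpoint_latitude_deg(l_shell):
    """Magnetic latitude at which a dipole field line crossing the
    equator at l_shell earth-radii reaches the earth's surface,
    from the field-line equation r = L * cos(latitude)**2 with r = 1."""
    return math.degrees(math.acos(math.sqrt(1.0 / l_shell)))

print(f"{footpoint_latitude_deg(4.0):.0f} degrees")  # 60 degrees
```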

Although the close relationship be- 
tween the plasmapause and the bound- 
ary of the convection region has been 
fairly well established, several fea- 
tures of the plasmapause remain un- 
explained. These include the sharp- 
ness of the plasma changes on either 
side, the shape of the plasmapause 
at any given instant, and its radial 
motions in time. There is some 
evidence that inward movements 
of the plasmapause during magnetic 
storms have a bearing on the so- 
called ionospheric storms, when the 
density of the mid-latitude iono- 
sphere drops sharply, leading to a 
deterioration in radio communication. 
Experiments aimed at probing the 
plasmapause are presently being 
planned with the aim of improving 
our understanding of the mechanisms 
influencing its formation and its dy- 
namic behavior. 

The Ionosphere 

The ionosphere (see Figure 1-4) is 
defined here as the electrically charged 
component of the earth's upper at- 
mosphere, consisting of free elec- 
trons, heavy positively charged ions, 
and a relatively small number of 
heavy negatively charged ions. The 
non-charged component — i.e., the 
atmosphere itself — will be consid- 
ered in the next section. 


Figure 1-4 — THE IONOSPHERE

[Chart: day and night profiles of electron density (electrons per cubic centimeter,
logarithmic scale) against geometric altitude in kilometers.]

The unfiltered ultraviolet and X-rays of the sun ionize many molecules, producing the
ionosphere. The ionosphere has several layers, each characterized by a more or less
regular maximum in electron density. The difference between the day and night
profile is due to the availability of solar radiation.

Our understanding of the forma- 
tion and behavior of the ionosphere 
is considerably more advanced than 
in the areas discussed previously. 
Major breakthroughs have been 
made, particularly in the past two 
decades, when direct probing through 
rockets and satellites has been pos- 
sible. Nevertheless, as is usually the 
case, increasing knowledge has raised 
new and previously unsuspected 
questions, some of which have con- 
siderable practical importance. 

The ionosphere is conventionally
divided into three fairly distinct regions:

1. The D region, lying between
about 60 and 95 kilometers altitude;

2. The E region, extending from
95 to about 140 kilometers; and

3. The F region, containing the
bulk of the ionization and ex-
tending upward from 140 kilometers.

The E and F regions are capable of 
reflecting medium- and short-wave 
radio waves and thus permit long- 
distance communication. The D re- 
gion plays an important role in 

propagating long waves, but it has 
an undesirable effect on radio propa- 
gation at the higher frequencies 
through absorption of the radio-wave
energy.

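The reflecting power of a layer can be summarized by its critical frequency, the highest frequency returned at vertical incidence, which grows as the square root of peak electron density. This sketch uses the standard plasma-frequency relation; the sample densities are typical values read off a profile like Figure 1-4, not figures stated in the text:

```python
import math

def critical_frequency_mhz(n_e_per_cm3):
    """Highest radio frequency (MHz) reflected at vertical incidence by
    a layer whose peak electron density is n_e_per_cm3 electrons/cm^3,
    from the plasma-frequency relation f_c [Hz] ~ 8980 * sqrt(N_e)."""
    return 8980.0 * math.sqrt(n_e_per_cm3) / 1.0e6

# A daytime F-region peak near 1e6 electrons/cm^3 turns back signals
# up to about 9 MHz (short waves); an E-region peak near 1e5 reflects
# only up to about 3 MHz (medium waves).
print(f"{critical_frequency_mhz(1e6):.1f} MHz")
print(f"{critical_frequency_mhz(1e5):.1f} MHz")
```
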
The F Region — In the case of the 
F region, where the concentrations 
of free electrons reach their peak, 
ionization is now known to be created 
by EUV radiation from the sun. The 
contributions of the various portions 
of the solar spectrum within this 
band are quite well understood. The 
principal unknowns arise basically 
from the fact that the atmosphere 
at these altitudes is so rarified that 
collisions between electrons, ions, 
and neutral particles are extremely 
rare, so that an individual electron 
has a very long lifetime and can move 
considerable distances from the re- 
gion in which it is formed. As a 
result, the electron concentration at 
any given time and place is strongly 
influenced by motions, including 
winds, atmospheric waves, and dif- 
fusion. Many of the anomalies in 
the behavior of the F region, which 
have been recognized since the early 
days of radio propagation, are almost 
certainly based on motions of this
kind.

Much of the current interest in the 
F region is focused on the explanation 
of these anomalies and the informa- 
tion they can provide on the winds 
of the outer atmosphere. One out- 
standing anomaly is that the daytime 
F region is usually denser in winter 
than in summer, despite the decreased 
sunlight available. Another is that 
the F region tends to be maintained 
throughout the long polar night when 
the sun disappears completely for 
long periods of time. This latter 
phenomenon seems at least partly 
due to bombardment of low-energy 
particles from the outer magneto- 
sphere, but movement of electrons 
from lower latitudes probably also 
plays a role. 

The most powerful tool to emerge 
in recent years for studying the F 
region is the so-called incoherent
(Thomson) scatter radar technique,
which allows direct ground-based 
investigation of high-lying ionization. 
This has added immensely to our 
knowledge of the mechanisms in- 
fluencing the F region. It has been 
possible to measure not only electron 
concentration, but also temperatures 
of the different plasma components 
and effects of electric fields in causing 
motion of the plasma. Results from 
existing and planned scatter radar 
installations will undoubtedly add a 
great deal to our knowledge of this 
outermost region of our atmosphere, 
and will, hopefully, lead to improve- 
ments in our ability to predict 
changes in radio propagation conditions.

Two F-region phenomena are of 
special interest at this time. These 
are the "polar wind" and the "iono- 
spheric storm," both of which are 
more than mere curiosities. The 
polar wind, which has been predicted 
on theoretical grounds but not yet 
adequately verified observationally, 
arises because of the existence of the 
long geomagnetic tail described in 
the previous section. The F-region 
plasma can diffuse quite freely along 
the direction of the earth's magnetic 
field, and as long as the field lines 
loop back into the opposite hemi- 
sphere of the earth no plasma is 
lost thereby. In the polar regions, 
however, the field lines are greatly 
stretched by the solar wind and 
eventually become lost in interplan- 
etary space. When F-region plasma 
travels out along these field lines, it 
ultimately disappears. This outward 
flow is expected to be at least partly 
supersonic; it plays a large part in 
the loss of the lighter constituents 
(hydrogen and helium) from the atmosphere.

Ionospheric storms, by contrast, 
have been recognized observation- 
ally for many years but still have 
no adequate theoretical explanation. 
Over most of the earth they appear 
as a rather rapid decrease in the 
electron concentration of the F region, 
accompanied by a corresponding in- 

ability to propagate radio signals that 
normally propagate freely. Recovery 
from this effect is much less rapid 
than its onset. The ultimate explana- 
tion for ionospheric storms may lie 
in a combination of inward motion 
of the plasmapause, discussed in the 
last section, movement of the F region 
caused by electric fields, and changes 
in the photochemistry of the region. 

The E Region — As one moves
downward from the peak of the F re- 
gion, photochemistry becomes stead- 
ily more important relative to motions 
in determining the characteristics of 
the ionosphere. The E region, which 
is largely formed by solar X-radiation 
together with some EUV radiation, 
shows quite different characteristics 
from the F region, and many of these 
differences arise from photochemical 
causes. Movements of the ionization 
are still important, however, in that 
they give rise to very substantial 
electric fields and currents because 
of the difference between the colli- 
sion characteristics of electrons and 
ions. In fact, the whole situation is 
analogous to a dynamo, in which an 
electrical conductor moves in a fixed 
magnetic field and thereby generates 
an electric field. For this reason, the 
region is often referred to as the 
"dynamo region" of the ionosphere. 

The E region is the seat of the 
major current systems responsible for 
surface magnetic-field variations. The 
latter are particularly pronounced 
near the magnetic equator, where the 
magnetic field lines are horizontal, 
and in auroral zones, where irregular 
changes in ionospheric conductivity 
are associated with particle bombard- 
ment. The great concentrations of 
ionospheric current in these regions 
are known respectively as the "equa- 
torial electrojet" and the "auroral 
electrojet" by analogy with the jet 
streams of the lower atmosphere. 
While the broad reasons for the 
existence of these electrojets are 
fairly well understood, they still pre- 
sent many puzzling features. The 
growth of small but intense irreg- 
ularities within and near the electro-
jets, in particular, presents a chal-
lenge in geophysical plasma physics 
that has not yet been fully met. 

The development of the thin, dense 
layers of electrons known collectively 
as "sporadic E," once an outstanding 
problem, now appears to be largely 
explicable in terms of the interaction 
of vertical wind shears with metallic 
ions of meteoric origin. This problem 
is not completely solved, however, 
and is still an active field for theory 
and experiment. The continuous in- 
flux of meteoric material to the at- 
mosphere has turned out to be quite 
important to both the E and D re- 
gions of the ionosphere. There are 
many unanswered questions con- 
nected with this meteoric material, 
including its chemical composition, 
its distribution within the atmos- 
phere, and its ultimate fate. 

The D Region — After years of 
relative neglect, a great deal of inter- 
est is presently focused on the D 
region, which is the real meeting 
ground between the lower and upper 
atmosphere. It now appears certain 
that many of the strange facets of 
this region's behavior are basically 
due to meteorological effects con- 
nected in as yet unknown ways with 
the lower atmosphere. Thus, on cer- 
tain winter days the D-region elec- 
tron concentration rises abnormally; 
this "winter anomaly" is associated 
with sudden warmings of the strato- 
sphere and mesosphere which are 
probably connected with the break- 
down of the polar winter vortex 
of the general atmospheric circula- 
tion. D-region electron concentra- 
tions usually display a high degree 
of variability during winter, while 
they are relatively stable from day 
to day in summer. These effects, 
and others of a similar kind, are 
currently arousing a great deal of 
interest both among meteorologists, 
who are extending their concepts 
upward into this unexplored region 
of the atmosphere, and among iono- 
spheric workers, who are bringing 
their interests downward. 



The practical importance of the 
D region arises from the fact that it 
efficiently absorbs radio waves at the 
higher frequencies (MF and HF), and 
reflects them at the lower frequencies 
(LF and VLF). Both of these proper- 
ties are greatly modified by solar dis- 
turbances, since the energetic radia- 
tion and particles emitted from the 
sun at these times penetrate through 
the thin upper regions of the iono- 
sphere and deposit most of their 
energy in the D region. When in- 
tense solar flares give rise to fluxes 
of energetic solar protons, for ex- 
ample, the protons are funnelled by 
the earth's magnetic field to the 
polar regions, where they enter the 
atmosphere and create very intense 
ionization in the 50- to 100-kilometer 
altitude range. The consequent strong 
absorption of HF radio waves (known 
as "polar cap absorption") com- 
pletely disrupts short-wave radio 
communication over the polar regions, 
sometimes for days on end. The X- 
rays emitted from the same flares 
cause mild, brief fadeouts that extend 
over the sunlit hemisphere of the
earth.

While the problem of predicting 
these effects is ultimately the prob- 
lem of predicting intense solar flares, 
it is also important to learn as much 
as possible about the relationship 
between the detailed characteristics 
of individual flares and the nature 
and magnitude of the ionospheric 
response. A great deal has already 
been achieved in this area through a 
combination of ground-based radio 
observations and direct rocket and 
satellite measurements of the radia- 
tions and particles responsible. 

Ionospheric Modification — One in- 
teresting recent development is the 
possibility of artificial modification of 
the ionosphere. Research on this 
problem is still in its infancy, but 
success could lead to a major increase 
in our ability to use the ionosphere 
for radio-propagation purposes. Un- 
controlled modification has been pro- 
duced artificially by high-altitude 
nuclear detonations; attempts are now being made to modify the ionosphere in more sophisticated ways by releasing ion clouds from rockets and by use of high-power radars on
the ground. This approach is likely 
to lead eventually to greater insight 
into the mechanisms that control 
the natural ionosphere as well as 
provide us with a new range of pos- 
sible practical uses. 

The Upper Atmosphere 

This section deals with the neutral 
gas of the upper atmosphere, as dis- 
tinct from the electrically charged 
component that forms the iono- 
sphere. In terms of altitude, the two 
overlap; indeed, they are closely 
coupled together in many ways, so 
that several of the problems men- 
tioned in the preceding section are 
inseparable from the problems of 
the neutral upper atmosphere. The 
neutral upper atmosphere also shows 
a range of properties not directly 
related to the ionosphere, however, 
and those are the questions of con- 
cern here. 

Like the ionosphere, the neutral 
atmosphere has long been divided 
into altitude regions, based mainly on 
thermal structure. (See Figure 1-5) 
The very lowest region of the atmos- 
phere, in which the earth's weather 
systems are located, is known as the 
troposphere; here the temperature 
generally decreases with increasing 
altitude. Above the tropopause the 
temperature first remains constant 
and then increases with increasing 
altitude through the stratosphere, 
terminating at a temperature maxi- 
mum near 50 kilometers altitude 
known as the stratopause. Above 
this lies the mesosphere, a region of 
decreasing temperature with height, 
which extends to about 85 kilometers. 
The temperature at the mesopause is 
lower than anywhere else in the at- 
mosphere, and can be below -150°
centigrade. Above the mesopause 
lies the thermosphere, in which the 
temperature steadily increases with 
altitude, eventually reaching a fairly steady value in excess of 1,000° centigrade. The warm regions of the
upper atmosphere owe their high 
temperatures to the absorption of 
solar ultraviolet radiation: by ozone near the stratopause, and of extreme ultraviolet (EUV) radiation in the thermosphere.
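The thermal layering just described can be restated compactly. The following sketch is for orientation only; the tropopause altitude shown is an assumed typical mid-latitude value, since the text does not give one.

```python
# A compact restatement of the thermal layering described above (a
# sketch for orientation only; the tropopause height is an assumed
# typical mid-latitude value not given in the text).
LAYERS = [
    # (region,        upper boundary,      boundary altitude, temperature trend)
    ("troposphere",  "tropopause",         "about 10-15 km", "decreases with height"),
    ("stratosphere", "stratopause",        "about 50 km",    "constant, then increases"),
    ("mesosphere",   "mesopause",          "about 85 km",    "decreases; coldest point"),
    ("thermosphere", "(exosphere above)",  "",               "increases, exceeding 1,000 C"),
]

for region, boundary, altitude, trend in LAYERS:
    print(f"{region:13s} {boundary:20s} {altitude:15s} {trend}")
```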

The Thermosphere — The intense
heating experienced by the thermo- 
sphere must set up some kind of 
circulation pattern, analogous to the 
circulation of the lower atmosphere 
but differing in many important re- 
spects because of the extreme rarity 
of the medium and the influence of 
the ionosphere. Little is known about 
this circulation, but the effects of the 
variable heat input on the density 
of the thermosphere can be directly 
detected through changes in the or- 
bital period of satellites that travel 
through the upper thermosphere. As 
solar activity increases, the thermo- 
sphere heats up, expands outward, 
and increases the frictional drag on 
satellites, thereby appreciably short- 
ening their lifetimes. 
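The qualitative effect described here can be illustrated with a toy model, not taken from the report: an exponential density profile whose scale height grows as the thermosphere is heated, combined with the standard drag-deceleration formula. All numerical values are order-of-magnitude assumptions.

```python
# Illustrative sketch (not from the report) of how thermospheric
# heating increases satellite drag. All numbers are rough assumptions.
import math

def density(alt_km, scale_height_km):
    """Exponential model of thermospheric density above 200 km.
    rho0 is an assumed order-of-magnitude reference density."""
    rho0 = 3e-10  # kg/m^3 at 200 km (assumption)
    return rho0 * math.exp(-(alt_km - 200.0) / scale_height_km)

def drag_acceleration(alt_km, scale_height_km,
                      cd=2.2, area_m2=1.0, mass_kg=100.0):
    """Drag deceleration a = 0.5 * rho * v^2 * Cd * A / m, with the
    orbital speed approximated as a constant 7.8 km/s."""
    v = 7800.0  # m/s, near-circular low-earth-orbit speed
    rho = density(alt_km, scale_height_km)
    return 0.5 * rho * v**2 * cd * area_m2 / mass_kg

# Solar heating inflates the thermosphere, i.e. raises its scale height,
# so the density at a fixed altitude (and hence the drag) goes up.
quiet  = drag_acceleration(400.0, scale_height_km=50.0)
active = drag_acceleration(400.0, scale_height_km=70.0)
print(f"drag at quiet sun:  {quiet:.2e} m/s^2")
print(f"drag at active sun: {active:.2e} m/s^2")
print(f"ratio: {active/quiet:.1f}x")
```

Even this crude model shows a several-fold increase in drag at a fixed altitude when the assumed scale height rises, consistent with the shortened satellite lifetimes noted above.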

Thermospheric heating depends on 
the structure of the sun's EUV spec- 
trum and its variability with solar 
activity, neither of which is known 
adequately, and on the constitution 
of the upper atmosphere and the 
manner in which the various atoms 
and molecules absorb the radiation. 
The non-uniformity of the heating 
from equator to poles causes strong 
temperature gradients which in turn 
give rise to very strong winds. Some 
of the properties of these thermo- 
spheric winds have been inferred 
from their influence on the F region 
of the ionosphere, which is amenable 
to exploration by ground-based ra- 
dio sounding, but this information is 
still very sparse. 

The principal chemical components 
of the thermosphere are atomic ox- 
ygen, helium, and hydrogen; the two 
latter, being the lightest constituents 
of the atmosphere, tend to diffuse 
toward the higher regions; atomic 
hydrogen, in particular, is so light 
that appreciable numbers of atoms can attain escape velocity and leave the earth entirely. The region in which an atom can proceed outward without colliding with other atoms is known as the exosphere, the true outer limit of the neutral atmosphere. The exosphere's presence can be detected from the ground because hydrogen scatters sunlight at night, but its properties have been little explored.

Figure 1-5 shows the average distribution of temperature with height (lapse rate) of the mid-latitude atmosphere. Lines ending with the word "pause" indicate the boundary between two spheres. These boundaries are not always well established.

The Mesosphere — This region, 
which overlaps the D region of the 
ionosphere, is the object of much 
current interest. It is a region of 
extreme complexity, in which mete- 
orological phenomena, mixing, and 
photochemistry all play a part. It 
is made up mostly of molecular ni- 
trogen and oxygen, just like the lower 
atmosphere, but it contains many 
minor constituents which, because of 
their chemical reactivity or ready 
ionizability, dominate the energies of 
the region. Among these are ozone, 
atomic oxygen, nitric oxide, water 
vapor, and many others. The region 
has proved extremely difficult to ex- 
plore directly, since the atmosphere 
is too dense to allow satellites to 
remain long in orbit and too high 
for the balloon techniques that are 
used at lower altitudes. Most existing 
information has come from rocket 
soundings, but even here the prob- 
lems are severe because of the com- 
paratively high density and the fact 
that rockets generally travel through 
the region at supersonic speeds, cre- 
ating shock waves that disturb con- 
ditions locally. 

The photochemistry of the region 
has recently been under intensive 
study, both through rocket experi- 
ments and by way of laboratory 
measurements of the rates of the 
various key chemical reactions. A 
broad picture of the important mech- 
anisms is beginning to emerge, but 
the roles played by transport and 
movements in carrying constituents 
from one point to another are still 
largely unexplored. Of special im- 
portance is the question of turbulence 
in the mesosphere and its influence 
on mixing of the various constituents. 
Many of the problems of dispersing 
pollutants in the lower atmosphere 
arise from mechanisms similar to 
those of distributing minor constitu- 
ents in the mesosphere; many of the 
photochemical reactions responsible 
for smog formation are also the same. 
Thus, the work presently being carried out in the relatively less complicated mesosphere may produce significant insights into these practical problems of the lower atmosphere.

Research Needs in the Upper At- 
mosphere — The greatest single need 
in this area of geophysics is for a 
systematic exploration of the prop- 
erties of the upper atmosphere using 
rocket and satellite techniques. At 
present we have only tantalizing glimpses of many of the important
features, and little or no information 
on how they change with time of 
day, season, solar activity, and al- 
titude. The techniques exist, and 
all that is required is a sustained 
synoptic program aimed at studying 
a variety of upper-atmosphere param- 
eters simultaneously under a wide 
range of conditions. Such a program 
would add immensely to our knowl- 
edge of the upper reaches of the atmosphere, and of the mechanisms
occurring there that may be important 
to our existence. Because of the 
complexity of the region, single prob- 
lems cannot be handled in isolation, 
and there is a real need for a thor- 
ough exploration of the entire region. 
Some scientists believe, however, that 
we now have enough general knowl- 
edge of what goes on in space that 
future studies should be limited and 
carefully aimed at specific goals. 


The sun and the motions of the earth about it essentially determine the earth's climate. The time
of day and the season are associated 
with well-known, normal variations 
in the weather. Superimposed on 
these regular patterns, however, are 
extremely large deviations from cli- 
matology. Some of these can be ex- 
plained (and, hence, forecast with 
some success) on the basis of physical 
equations; some are so irregular or 
little understood as to require a 
statistical and probabilistic approach 
to prediction. 

Advances in Forecasting Technique 

For many decades, atmospheric sci- 
entists attempted to relate solar 
perturbations to terrestrial weather 
features, with no significant success. 
Until recently, the only data avail- 
able to them were those collected 
from ground-based observatories and 
weather stations. When radio arrived 
on the scene, scientists began to re- 
late variations in radio propagation 
to observed changes in the character 
of the sun. 

The Space Age produced a revolu- 
tion in understanding and procedure. 
It became clear that, in general, the 
farther one moves away from the 
troposphere, the more one's environ- 
ment is influenced by solar perturba- 
tions. In the region above the mesopause, at about 80 kilometers from
the earth, variations in temperature 
and density result almost entirely 
from irregular solar emissions and 
hardly at all from the moving pattern 
of low-level cyclones and anticyclones. 

These new insights — together with 
the realization that men, equipment, 
and their activities above the lower, 
protective atmosphere are vulnerable 
to (and may benefit from) environ- 
mental changes — gave impetus to a 
rush of new, very-high-altitude scien- 
tific missions and related activities. 
These include observations from rock- 
ets, satellites, and improved ground- 
based platforms; computerized data- 
processing techniques; and prediction. 

Solar Forecasting Services — At- 
mospheric scientists, ionospheric and 
solar physicists, and even astrono- 
mers have shared in these new activi- 
ties. But the atmospheric scientist, in 
becoming involved with the expanded 
environment, brings with him a spe- 
cial point of view: he is vitally con- 
cerned with data standardization, real- 
time use and rapid transmission of 
data, and the tailoring of his products 
to operational needs. He brings added 
emphasis with regard to synoptic cov- 
erage. He uses meteorological tech- 
niques in studying high-altitude vari- 
ations such as anomalous variations 
in neutral density. He even applies 
Rossby's concepts to circulation fea- 
tures on the "surface" of the sun. 

A combination of the viewpoints 
and methods of various kinds of sci- 
entists has now brought a new and 
important scientific service into being 
— the solar forecast center. The first 
such center was established by the 
U.S. Air Force with a nucleus of 
highly trained and cross-disciplined 
scientists from the Air Weather Serv- 
ice and the Air Force Cambridge Re- 
search Laboratories; their mission was 
to provide tailored, real-time support 
to military operations affected by 
the environment above the "classical 
atmosphere." The ionospheric-pre- 
diction activity of the National Oce- 
anic and Atmospheric Administration 
(NOAA) has been enlarged to pro- 
vide a complementary service for the 
civilian community. 

Major Problem Areas 

Solar forecasting centers and other 
such forecasting services undertake 
to meet the needs of a variety of cus- 
tomers, including radio communica- 
tors, astronauts, and scientific re- 
searchers. The most important areas 
of interest for such customers are as follows:

The Ionosphere — High frequency
(HF) (3-30 MHz) radio communica-
tions are widely used as an inexpen- 
sive, fairly reliable means of trans- 
mitting signals over long distances. 



The HF radio communicator therefore 
requires long-range and short-term 
forecasts of the specific frequencies 
that will effectively propagate 
throughout the day. This is known 
as frequency management and means, 
in short, the determination of the 
frequency that can be used from a 
particular transmitter to a particular 
receiver at a particular time. Propa- 
gation of the HF signal to a distant 
receiver employs single or multiple 
"reflections" from the ionosphere and 
the earth. Since the state of the iono- 
sphere is dynamic and highly respon- 
sive to solar activity, the number of 
usable frequencies depends on (a) the 
intensity of ionizing solar ultraviolet 
and X-ray emissions and (b) the de- 
gree of disturbance of the magneto- 
spheric-ionospheric environment. 
These are in addition to such factors 
as time of day, latitude, and equip- 
ment characteristics. 
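The core of frequency management can be sketched with the classical "secant law" of oblique propagation: for a single ionospheric hop, the maximum usable frequency (MUF) is roughly the critical frequency foF2 divided by the cosine of the angle of incidence at the reflecting layer. The sketch below is not taken from the report; it assumes flat-earth geometry and an illustrative quiet-day critical frequency.

```python
# Minimal sketch of the secant law used in HF frequency management.
# Flat-earth, single-hop geometry is assumed; the 7 MHz critical
# frequency and 300 km layer height are illustrative values.
import math

def muf_single_hop(fo_f2_mhz, ground_range_km, layer_height_km=300.0):
    """MUF = foF2 * sec(theta), where tan(theta) = (range/2) / height."""
    theta = math.atan((ground_range_km / 2.0) / layer_height_km)
    return fo_f2_mhz / math.cos(theta)

for rng in (500, 1500, 3000):
    print(f"{rng:5d} km path -> MUF ~ {muf_single_hop(7.0, rng):.1f} MHz")
```

The longer the path, the more oblique the incidence and the higher the usable frequency; real prediction services replace the fixed foF2 and layer height with values that track solar and geomagnetic activity.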

The HF communicator also requires 
forecasts and real-time advisories of 
short-wave fadeouts caused by X-ray 
emissions related to solar flares. If he 
gets these, he can insure that alternate 
means of communication (satellite or 
microwave methods) are available for 
use in sending the highest-priority 
messages. He can also differentiate 
between communication outages 
caused by propagation and those 
caused by equipment malfunction. If 
he knows that an outage is due to 
a short-wave fadeout, the communi- 
cator can simply wait for his circuit 
to return to normal to continue 
low-priority traffic rather than take 
time-consuming action to switch frequencies.

Other solar-terrestrial disturbances 
which disrupt communications, such 
as "polar cap absorption" events, 
geomagnetic-ionospheric storms, and 
auroral and geomagnetic substorm 
events, must also be forecast to allow 
the communicator to prepare to use 
alternate means of communication. 

Finally, the communicator needs an 
accurate and complete history of ionospheric disturbances to post-analyze
his system's performance. Outages 
that have been attributed to poor 
propagation when no disturbances 
were observed can then be identified 
as being due to mechanical or pro- 
cedural problems. 

High-Altitude Density — Space 
vehicles which spend all or part of 
their orbits in the region from 100 to 
1,000 kilometers above ground are 
subject to significant drag from the 
neutral atmosphere. The density of 
this region and the resulting satellite 
drag are dynamic parameters. Their 
variations reflect heating of the high 
atmosphere produced by solar ultra- 
violet variations and corpuscular pre- 
cipitations, mostly at polar latitudes. 

Satellite drag perturbs the orbital 
parameters of the vehicles and, in 
turn, complicates cataloguing, track- 
ing, and control. Density variations 
can sometimes alter the orbit enough 
to carry the vehicle out of an area of 
scientific interest or otherwise de- 
grade its mission. If mission con- 
trollers are to be able to compensate 
adequately for orbital changes, they 
need the following: 

1. A dynamic, accurate model of 
the global distribution of at- 
mospheric density throughout 
the region of interest; 

2. Accurate observations of such 
parameters of the model as 
ultraviolet flux, solar-wind en- 
ergy, and density; and 

3. Accurate forecasts of these parameters.

Space Radiation — Man in space 
faces radiation hazards from galactic 
cosmic rays, trapped radiation, and 
storms of particles (mostly protons) 
from solar flares. 

Cosmic radiation is so penetrating 
that there is no practical means of 
shielding against it. Astronauts sim- 
ply must live with it. Its intensity is 

low enough that it does not pose a 
serious hazard. 

The trapped-radiation environment 
of near-earth space, however, is so in- 
tense that prolonged exposure would 
be fatal. Consequently, mission plan- 
ners avoid that region by orbiting 
below it or arranging to pass through 
it quickly. 

Solar-flare radiation poses a threat 
for a lightly shielded astronaut. The 
threat is not especially significant, 
however, because (a) major events 
occur rarely, (b) the astronaut can be 
shielded effectively from most of the 
radiation (in effect, the Apollo com- 
mand module is a "storm cellar"), 
and (c) the astronaut can return to 
the safety of a shielded vehicle before 
significant doses have time to build up.

Despite the relatively low severity of this hazard, certain space-
environment support is essential to 
protect man effectively from the haz- 
ards of solar-flare radiation. Mission 
planners need forecasts of the likeli- 
hood of a particle event to insure that 
they have enough options available 
in case an event occurs. Observations 
of the flare radiation are needed to 
alert the astronauts. Techniques are 
required to project the course and 
intensity of an observed event so that 
the radiation threat can be accurately assessed.

Today there is some concern over 
the radiation hazard to passengers 
and crew of supersonic transports, 
especially for polar flights. Though the question is not completely resolved, the threat appears to be minimal, since solar
cosmic-ray events sufficiently intense 
to cause undesirably high radiation 
doses are exceedingly rare and prob- 
ably occur less than once every ten 
years. But forecasts, observations, 
and alerts will be needed to insure 
full protection. Warning systems are 
being developed, but warnings are 
unlikely to reach aircraft already in 
polar regions unless communication satellites can be used that are not
subject to the "polar blackout" that 
accompanies any biologically danger- 
ous particle flux. 

Electromagnetic radiation from so- 
lar flares can be observed by sensitive 
radio receivers in the form of radio 
"noises," or interference, if the sun 
happens to be in the direction that 
the antenna is "looking." Observa- 
tions of the sun's radio emission are 
required to advise system operators 
of the nature of the signal they are receiving.

General Observational Data — The 
researcher needs forecasts and real- 
time advisories of the occurrence of 
selected solar and geophysical events 
in order to schedule and conduct ex- 
periments. He needs a consistent base 
of comparable observational data that 
can be vigorously examined for sig- 
nificant relationships. 

The State of the Art 

To meet the needs of these various 
operational and research communi- 
ties, varied capabilities, skills, and 
understanding are required. Individ- 
ually or institutionally, the atmos- 
pheric scientist and his colleagues 
must provide the following: 

1. Observations of the sun and 
the space environment; 

2. Rapid communications and data processing; and

3. Forecasts of significant solar activity and geophysical responses.

In addition, they must have an un- 
derstanding of the needs of specific 
systems and operations, in order to 
present advice to an operator in the 
form that will benefit him most. 

Observations of the Sun and the 
Space Environment — The observa- 
tions must be continuous, consistent, 

comparable, and, where appropriate, 
synoptic. They should include, but 
not be limited to, solar flares, active- 
region parameters, solar radio emis- 
sion, space radiation, solar wind, the 
ionosphere, and the geomagnetic field. 

U.S. civilian and military agencies 
maintain a network of operational 
solar observatories around the globe. 
This network is supplemented by nu- 
merous scientific observatories. 
Nearly continuous patrol of solar 
chromospheric activity has been 
achieved thereby. But the data ob- 
tained are not as useful in operational 
situations as, ideally, they might be. 

First, they are subject to consider- 
able inconsistency due to the subjec- 
tive evaluations of the individual 
observers. To obtain the final de- 
scription of a solar event, many often 
highly divergent observations are sta- 
tistically combined. But in the quasi- 
real-time frame of operational sup- 
port, evaluation of a solar event must 
be made on the basis of only one or 
two observations. 

Second, patrol of the sun's radio 
emission is not complete. Gaps in 
synoptic coverage exist, frequencies 
useful for diagnosing solar activity 
are not always available, and some ob- 
servatories report uncalibrated data. 
Operational radio patrol is about 90 
percent effective, nonetheless. 

Unmanned satellites are patrolling 
energetic-particle emission and some 
other space parameters for opera- 
tional use. Real-time energetic-par- 
ticle patrol presently exceeds 20 hours 
a day; X-rays, 16 to 18 hours; and 
solar wind, 8 to 9 hours. The obser- 
vations are limited, however, in that: 
(a) they are not continuous; (b) data 
acquisition and processing are expen- 
sive; (c) all needed parameters are 
not sampled; (d) different sensors are 
not intercomparable; (e) sensor re- 
sponse changes; and (f) the vehicles 
have limited lifetimes. Other scien- 
tific satellites are sampling the space 
environment, but limited readout and 
data-processing capabilities and experimenters' proprietary rights prevent these data from being used operationally.

Observations of the ionosphere are 
being made using vertical- and 
oblique-incidence ionosondes, riome- 
ters, and sudden-ionospheric-disturb- 
ance sensors. For operational use, 
however, timely receipt of data is 
available from only about 20 loca- 
tions around the world. 

Several other observations of solar 
and geophysical parameters are being 
made for operational use. These in- 
clude radio maps of the sun, ground- 
based neutron monitors, and geomag- 
netic-field observations. In general, 
they suffer from the same limitations 
as the observational networks de- 
scribed earlier. 

The recent establishment of World 
Data Centers for storing and ex- 
changing space data represents a sig- 
nificant advance. These centers are 
supported by the Inter-Union Com- 
mission on Solar-Terrestrial Physics 
of the International Council of Scien- 
tific Unions. However, the primary 
benefit comes to the research com- 
munity rather than directly to the 
operational community. Furthermore, 
the program still suffers from incon- 
sistencies, incomparabilities, and in- 
completeness of much of the data. 

Rapid Communications and Data 
Processing — Rapid communication 
and processing of data are essential 
for timely forecasts. Even in the ab- 
sence of forecast capability, they are 
required to make maximum opera- 
tional use of observations. 

The Air Force has designated a 
special teletype circuit for the rapid 
movement and exchange of solar- 
physical data within the United 
States. Both civil and military agen- 
cies have access to it. This circuit 
makes possible near-real-time relay. 
Data from the overseas observatories 
must be relayed by more complex and 
time-consuming means. 



Up to a year or two ago, processing 
of the data was done by hand. Sys- 
tems to process the data by machine 
have now begun to come into use, 
and the future will see more and more 
use of computers in operational space- 
environment support. 

Forecasts of Significant Solar Ac- 
tivity and Geophysical Responses —
Geophysically significant solar events 
must be forecast several hours, days, 
weeks, or even years in advance. Sig- 
nificant factors of the earth's environ- 
ment, such as density at satellite alti- 
tudes and the state of the ionosphere, 
must also be forecast. In general, the 
shorter the forecast period, the more 
stringent the accuracy requirement. 

Research on forecasting techniques 
has been under way for many years. 
The approaches have been many and 
varied, and no single technique has 
yet stood up under the test of con- 
tinued operational use. Since knowl- 
edge of the physics of solar processes 
is lacking, present techniques are 
based on statistical correlations and 
relationships, observed solar features, 
even the influence of planetary con- 
figurations. By a combination of 
many techniques and subjective skills, 
operational forecasters have now de- 
veloped a limited ability to forecast 
solar activity. 

How well can solar activity be fore- 
cast? It is fairly safe to say that fore- 
casting cannot be done well enough 
for the operator to place full reliance 
on it. Predictions can be used to ad- 
vantage, but the operator knows he 
must have alternatives available to 
compensate for an incorrect forecast. 
As a rough approximation (doubtless 
open to challenge), no better than one 
out of every two major, geophysically 
significant solar events can be fore- 
cast 24 hours in advance. Addition- 
ally, at least three forecasts of events 
that do not occur are issued for every 
forecast that proves accurate. The 
most valuable forecasting tool has 
proved to be persistence. If a region 
of solar activity hasn't produced a 

major event, it probably won't. If a 
major event has occurred, another is 
likely to follow. Such factors as re- 
gion size and radio-brightness tem- 
perature, magnetic structures, and 
flare history have also proved of some value.

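The skill figures quoted above can be restated with two standard forecast-verification measures. The counts below are hypothetical, chosen only to match the report's rough ratios (one event in two detected; three false forecasts per accurate one).

```python
# Restating the report's rough skill figures as standard verification
# scores. The event counts are hypothetical, chosen to match the
# quoted ratios.
def verification_scores(hits, misses, false_alarms):
    """Probability of detection and false-alarm ratio for event forecasts."""
    pod = hits / (hits + misses)                 # fraction of events forecast
    far = false_alarms / (hits + false_alarms)   # fraction of forecasts wrong
    return pod, far

# "one out of every two major events" forecast -> 10 hits, 10 misses;
# "at least three forecasts of events that do not occur ... for every
# forecast that proves accurate" -> 30 false alarms against 10 hits.
pod, far = verification_scores(hits=10, misses=10, false_alarms=30)
print(f"probability of detection: {pod:.2f}")  # 0.50
print(f"false-alarm ratio:        {far:.2f}")  # 0.75
```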
One factor that complicates the 
forecast problem is that most research 
schemes attempt to predict large "so- 
lar flares." In reality, what the system 
operator or mission controller is in- 
terested in is the geophysically sig- 
nificant solar event, whether large or 
small. Experience has shown that 
most large solar flares are geophysi- 
cally significant, but some are not. 
Most small flares are of no conse- 
quence, but a disturbing percentage are geophysically significant.

Forecasting of terrestrial proton 
events after a flare has occurred has 
been more successful, although it is 
not without limitations due to uncer- 
tainties and unavailability of relevant 
data. The storm of particles emitted 
by a flare takes a day or two to prop- 
agate from the sun to the earth, and 
this time interval permits a forecaster 
to analyze the diagnostic information 
contained in the electromagnetic 
emissions that accompanied the solar 
event. Analysis of radio-burst signa- 
tures and X-ray enhancements indi- 
cates whether the particles have been 
accelerated. Quantitative forecasts of 
the course and magnitude of the event 
are often possible. 
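The day-or-two lead time quoted above can be checked with simple arithmetic: the time for flare-associated particles to cover one astronomical unit at an assumed bulk speed. The 1,000 km/s figure below is illustrative only.

```python
# Rough check of the day-or-two sun-to-earth travel time quoted above,
# for an assumed (illustrative) bulk particle speed of 1,000 km/s.
AU_KM = 1.496e8  # one astronomical unit, in kilometers

def travel_time_hours(speed_km_s):
    """Sun-to-earth travel time at a constant radial speed."""
    return AU_KM / speed_km_s / 3600.0

print(f"{travel_time_hours(1000.0):.0f} hours")  # ~42 hours, i.e. under two days
```

The much faster electromagnetic emissions arrive in about eight minutes, which is what gives the forecaster the diagnostic head start described in the text.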

Other aspects of the space environ- 
ment are being forecast with varying 
degrees of success. The mean 10.7- 
centimeter radio flux from the sun is 
an input into high-altitude-density 
models; efforts to forecast it have 
been reasonably successful, in part 
because the parameter varies rather 
slowly. In contrast, practically no 
capability exists for forecasting vari- 
ability in the geomagnetic field, an- 
other important input; short-term 
prediction of geomagnetic storms is 
particularly difficult. 

Forecasts of ionospheric parameters 
for radio communicators have been 
made for many years. The field is 
quite extensive and complex. The 
Space Environment Laboratory, of 
NOAA, issues monthly, and some- 
times more frequent, outlooks on ra- 
dio propagation conditions. Monthly 
median predictions are generally ade- 
quate for most frequency-manage- 
ment applications, though significant 
improvements could be made by more 
frequent modification of the median 
predictions. Ability to forecast iono- 
spheric disturbances is closely tied to 
the ability, discussed earlier, to fore- 
cast geophysically significant solar events.

Understanding of Operational Needs

Effective application of space en- 
vironment observations and forecasts 
requires, first, physical knowledge of 
the interaction between the environ- 
ment and the specific activity being 
supported. Equally important, the 
forecaster and the operator must de- 
velop an effective rapport, based on a 
thorough knowledge by the former of 
the latter's system or mission. All 
parties must recognize that many 
things can happen to man's space- 
related activities which are significant 
but which cannot be explained. 

The Direction of Future 
Scientific Effort 

There is a clear, continuing need to 
advance the state of the art of opera- 
tional solar and space-environmental 
support. Capabilities are already far 
from adequate, and the increasing 
sophistication of the activities that are 
affected requires a matching growth 
in capabilities. 

Future scientific efforts need to fo- 
cus on the following: 

1. Techniques to provide accurate 
long-range and short-term fore- 
casts of geophysically significant solar events. Basic research
on the physics of events is re- 
quired to get away from the 
admitted limitations of statisti- 
cal techniques. 

2. Ionospheric forecasting and 
specification techniques, espe- 
cially in the area of short-term 
frequency management. The 
problem is especially acute dur- 
ing magneto-ionospheric storms 
and within polar latitudes. 

3. Modeling of high-altitude atmospheric density that is dynamic — i.e., which reflects hour-to-hour and day-to-day variations.

4. Better and more complete ob- 
servations of solar and geo- 
physical phenomena and tech- 
niques and hardware to process, 
format, and transmit data with 
minimal delay. 

5. Operationally useful work on 
the propagation conditions of 
energetic particles between the 
sun and the earth. 

6. Techniques to forecast geomagnetic disturbances accurately.
The level of disturbance of the 
geomagnetic field must be re- 
lated to operationally signifi- 
cant applications such as the 
ionosphere and high-altitude 
neutral density. 

Finally, the researcher should 
not be satisfied with research 
alone. He must push his ad- 
vances into the realm of "de- 
velopment" and their applica- 
tion to the many activities of 





An Overview of Deep-Earth Chemistry and Physics 

We recall that the earth consists of 
three parts: a thin crust, five to forty 
miles thick; a "mantle," below the 
crust, extending a little less than half- 
way down to the center; and a core. 
(See Figure II-l) The crust is the 
heterogeneous body on which we live 
and grow our food, and from which 
we derive all mineral resources, metals, 
and fuels. It is the only part of the 
earth that is accessible and directly 
observable; the composition of the 
mantle and core must be inferred from 
observations on the surface. 

The crust, the oceans, and the 
atmosphere above them form the en- 
vironment in which we live. This 
environment has been shaped through 
geologic time and continues to be 
shaped by forces which originate in 
the mantle beneath it. Its nature and 
the processes that occur in it mold the 
environment and determine what part 
of the surface will be land and what 
part sea, which oceans will expand 
and which contract, which continents 
will move apart and which come to- 
gether. Forces mainly within the 
mantle determine where mountains 
will rise, where stresses will cause 
rocks to fracture and flow, where 
earthquakes will occur, how intense 
and how frequent they will be. 
(Earthquakes, it may be recalled, 
have killed more than one million 
people in this century.) Most vol- 
canoes have their source in the 
mantle. They destroy towns and 
crops; the gases and solid particles 
they discharge into the atmosphere 
contribute significantly to atmospheric 
"pollution," in the form, for instance, 
of huge amounts of sulfur oxides; at 
the same time, volcanoes provide the 
very ingredients (water, carbon di- 
oxide) without which life would be impossible.


This idealized view of the interior of the earth shows the distance in kilometers from 
the surface to the several regions. This view is admittedly simplified; as time goes 
on, our knowledge of the structure of the earth's interior will undoubtedly become 
more detailed and complex. 

This interaction of crust and mantle 
cannot be overemphasized. The whole 
of the environment, the total ecology, 
is essentially a product of mantle processes.
Participation of the core in crustal 
affairs is much less clear. At the 
moment the core is of interest mainly 
as the source of the earth's magnetic 
field; but there are reasons to believe 
that it may yet play a more funda- 
mental role in the earth's economy, 
perhaps as a source of gravitational 
energy or perhaps in converting some 

of the earth's kinetic energy of rota- 
tion into heat. 

Problems and Methodologies 

Problems of the deep interior are 
essentially (a) to determine the chemi- 
cal composition and physical nature of 
the materials composing the mantle 
and core, which are nowhere acces- 
sible to direct observation, and (b) to 
determine the distribution and nature 
of the energy sources and forces that 



cause deformation, flow, and volcanism.
What observations, and what meth- 
ods of study, do we have? 

Seismic Waves — Seismic (elastic) 
waves are propagated between the 
focus of an earthquake and receivers 
(seismographs) appropriately located 
on the earth's surface. The speed of 
propagation depends on the physical 
properties of the propagating mate- 
rial; this knowledge of speed versus 
depth within the earth provides a clue 
as to variations of physical proper- 
ties — hence, of composition — with 
depth. A serious problem arises in 
that physical properties are sensitive 
to pressure, and pressures inside the 
earth greatly exceed those that can 
conveniently be created in the labora- 
tory for the purpose of studying their 
effects on physical properties. High 
pressures, of the order of those exist- 
ing in the core, can be created by 
means of explosive shock waves, but 
precise measurement of physical prop- 
erties under shock conditions remains 
exceedingly difficult and costly. 
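The inference runs, in essence, from travel times to structure: a wave crossing a layer takes time equal to thickness divided by velocity, so observed travel times constrain the velocity-depth profile. A minimal sketch, with purely illustrative layer thicknesses and velocities (not measured values):

```python
# Toy illustration (hypothetical layers): vertical travel time of a
# seismic wave through a stack of constant-velocity shells.
layers = [            # (thickness in km, P-wave velocity in km/s)
    (35, 6.5),        # crust
    (2850, 11.0),     # mantle (assumed average)
    (3470, 9.5),      # core (assumed average)
]

def travel_time(layers):
    """Sum of thickness/velocity over all layers, in seconds."""
    return sum(thickness / velocity for thickness, velocity in layers)

t = travel_time(layers)
print(f"one-way vertical travel time: {t:.0f} s")  # about ten minutes
```

In practice the problem is run in reverse: many observed times, along many paths, are inverted to find the velocities.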

Measurable Properties of the Earth
as a Whole — Properties such as 
earth's total mass, its moment of in- 
ertia (best determined from observing 
the motion of artificial satellites), and 
the frequency of its free oscillations 
(i.e., the "tone" at which the earth 
vibrates, like a struck bell, when dis- 
turbed by a sufficiently violent earth- 
quake) provide constraints on density 
distribution and physical properties in 
the form of global averages. For in- 
stance, the variation in physical prop- 
erties with depth deduced from seismic studies must average out to the values deduced from these global constraints.
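The way such global averages constrain interior models can be sketched numerically. For a hypothetical two-shell earth (a uniform core and a uniform mantle, with assumed average densities), the total mass comes out close to the observed value, but the moment-of-inertia factor C/MR² overshoots the observed value of about 0.331, a sign that density must also increase with depth within each shell:

```python
import math

R_EARTH = 6.371e6      # m
R_CORE  = 3.48e6       # m
RHO_CORE, RHO_MANTLE = 10900.0, 4450.0   # kg/m^3 (assumed averages)

def shell_mass(r_in, r_out, rho):
    """Mass of a uniform spherical shell."""
    return rho * 4/3 * math.pi * (r_out**3 - r_in**3)

def shell_inertia(r_in, r_out, rho):
    """Moment of inertia of a uniform spherical shell about a diameter."""
    return rho * 8/15 * math.pi * (r_out**5 - r_in**5)

mass = shell_mass(0, R_CORE, RHO_CORE) + shell_mass(R_CORE, R_EARTH, RHO_MANTLE)
inertia = (shell_inertia(0, R_CORE, RHO_CORE)
           + shell_inertia(R_CORE, R_EARTH, RHO_MANTLE))
factor = inertia / (mass * R_EARTH**2)

print(f"mass   = {mass:.2e} kg  (observed ~5.97e24)")
print(f"C/MR^2 = {factor:.3f}     (observed ~0.331)")
```

Any velocity-depth profile proposed from seismology must be compatible with such whole-earth sums.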
Lava — The nature of molten mate- 
rial (lava) that rises from the mantle 
and spills out on the surface (from 
volcanoes) provides information on 
the chemical nature of the source. The 
problem is not straightforward, how- 
ever, for the chemical composition of 

the liquid that forms by partial melt- 
ing of a system as complicated as 
ordinary rock is not generally the 
same as that of the parent rock; it 
varies, moreover, as a function of 
pressure and temperature. A great 
deal of painstaking experimental work 
at high pressure is required before the 
chemistry of the earth's mantle will 
be understood. 

The "Heat Flow" — The heat that 
escapes across the surface of the 
earth from the interior provides in- 
formation on the distribution of heat 
sources and temperature within the 
earth. (This heat flow, incidentally, 
amounts to some 30 million mega- 
watts and is equivalent to the output 
of about 30,000 large modern power 
plants.) The manner in which this 
heat is transferred within the earth is 
not precisely known; it is generally 
believed that transfer in the deep in- 
terior is mostly by "convection": 
mass motion of hot stuff rising while 
an equivalent amount of cold stuff 
is sinking elsewhere. Convection is 
generally believed to provide a mech- 
anism to move the crust. Earthquakes 
may well be the expression of strains 
set up by this motion, and the geo- 
graphic distribution of earthquakes 
may reflect the present pattern of con- 
vective flow in the mantle. 
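The arithmetic behind the figures quoted above is easily checked: 30 million megawatts divided among 30,000 plants is 1,000 megawatts apiece, and spread over the earth's surface it amounts to a mean flux of only a few hundredths of a watt per square meter. A sketch of that calculation:

```python
import math

total_heat_flow_w = 30e6 * 1e6        # 30 million megawatts, in watts
n_plants = 30_000
earth_radius_m = 6.371e6

# Output attributed to each "large modern power plant", in megawatts
per_plant_mw = total_heat_flow_w / n_plants / 1e6

# Mean flux through the earth's surface
surface_area_m2 = 4 * math.pi * earth_radius_m**2
mean_flux_w_m2 = total_heat_flow_w / surface_area_m2

print(f"per plant: {per_plant_mw:.0f} MW")
print(f"mean surface heat flux: {mean_flux_w_m2 * 1000:.0f} mW/m^2")
```

The tiny mean flux explains why this enormous total is imperceptible at the surface except with sensitive instruments.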

The chief problems are: (a) How 
does the solid mantle flow? How do 
we best describe its response to me- 
chanical forces? (b) What flow pat- 
tern do we expect in a body as com- 
plicated and heterogeneous as the 
earth? We are faced here with some 
difficult mathematical problems in 
fluid dynamics. It must also be re- 
membered that, contrary to the com- 
mon state of affairs in engineering 
studies of fluid dynamics, where the 
initial conditions are precisely stated 
and controllable, conditions in the 
earth regarding flow properties and 
distribution of heat sources and tem- 
perature are not known and must be 
deduced from a comparison of theory 
with geological or geophysical observations.
Sea-Floor Spreading — Geological 
studies regarding past history of the 
earth provide information as to what 
has happened. Most importantly, they 
provide information as to the rate at 
which the crust deforms or moves, the 
sense of its motion, and its duration. 
This is essential input to the solution 
of the dynamical problems mentioned 
in the previous paragraph. 

In this respect, the last decade has 
seen what may well be the most im- 
portant and far-reaching development 
since the days of Hutton (1795). 
What has now become known as 
"sea-floor spreading" (or "plate tec- 
tonics" or "global tectonics") is the 
general proposition that new oceanic 
crust is constantly generated from the 
mantle along submarine ridges while 
an equivalent amount of crust is re- 
sorbed into the mantle at other places; 
in between, the whole crust moves at 
rates of a few inches per year. This 
general pattern of motion provides an 
important and much-needed clue to 
the behavior of the mantle. 

Phase Changes — It is well known
in materials science that, at high pres- 
sure or temperature, substances may 
occur under forms with properties 
quite different from those of the same 
substance under normal conditions 
("phase changes"). A typical example 
is that of common carbon that occurs 
either as graphite (a soft material used 
for lubrication) or diamond (the hard- 
est known mineral). It is now clear 
that phase changes do occur in the 
mantle, the lower half of which has 
properties quite different from those 
of its upper half even though its gross 
chemical composition may be roughly 
the same. Again, a very large amount of difficult experimentation at high pressure is needed to ascertain the
form under which common minerals 
could occur in the earth's deep in- 
terior. It is not unlikely that such 
studies could lead to the discovery 
and synthesis of new materials of 
engineering importance; for example, very hard substances might be synthesized.

Evaluation of Present Knowledge 

In spite of recent advances, our 
ideas and knowledge of the deep in- 
terior remain largely qualitative. We 
know roughly, but not exactly, what 
the mantle consists of. We suspect 
that phase changes occur at certain 
depths, but we cannot pinpoint the 
exact nature of these changes. We 
can estimate roughly how much heat 
is generated in the mantle and its 
source (mostly radioactive disintegra- 
tion), but cannot tell yet how the 
sources of heat and temperature are 
distributed. We do not have precise 
information as to the mechanical and 
flow properties of the mantle, and 
haven't yet solved the mathematical 
equations relevant to convection, even 
though approximate solutions have 
been found. We don't even know 
whether the whole mantle, or only its 
upper part, participates in the motion. 

The situation regarding the core is 
similarly vague. We know that it con- 
sists dominantly of iron, but cannot 
determine what other elements are 
present. We know that the outer two- 
thirds of the core is liquid, and that 
motion in this metallic liquid gener- 
ates the earth's magnetic field, but the 
details of the process are still obscure, 
and the full set of equations that 
govern the process has not yet been 
solved. Important physical properties 
of the core, such as its electrical con- 
ductivity, are still uncertain by one 
or more powers of ten. The source of 
the energy that drives the terrestrial 
dynamo is still obscure. Even though 
we suspect that motions in the core 
are the cause of observable effects at 

the surface (e.g., irregular changes in 
the length of the day, small periodic 
displacements of the earth with re- 
spect to its rotation axis), we still cannot assess these effects quantitatively.
We suspect interactions between the 
core and mantle which ultimately af- 
fect the crust, but cannot focus pre- 
cisely on any of them. 

Goals and Requirements for 
Scientific Activity 

To understand our total environ- 
ment, to see how it came to be the 
way it is, and how it is changing from 
natural — as opposed to human — 
causes, and to control it to our best 
advantage (e.g., by curtailing earth- 
quake damage or by muzzling dan- 
gerous volcanoes with due regard to 
their positive contributions to human 
ecology), we need a better under- 
standing of the constitution and be- 
havior of the deeper parts of the 
earth. To reach this understanding re- 
quires a concerted and sustained ef- 
fort in many directions, encompassing 
a wide range of scientific disciplines, 
from fluid mechanics to materials sci- 
ence, from electromagnetic theory to 
solid-state physics. 

Observational Networks — If we 
can foretell the future from the recent 
past, it is clear that a key to further 
progress is the establishment and 
maintenance of a first-rate global net- 
work of observatories such as the 
worldwide network of standard seis- 
mographic stations established under 
the VELA program of the Department 
of Defense. This network has enabled 

seismologists to determine more pre- 
cisely than ever before just where 
earthquakes occur, and has brought 
into sharp focus a remarkable corre- 
lation between earthquakes and other 
geological features that had only been 
dimly perceived. This correlation is 
fundamental to the notion of global 
tectonics. Among other examples of 
progress resulting from improvement 
in instrumentation, one can mention 
the determination of the depth in the 
mantle at which some of the phase 
changes occur and refinements in our knowledge of the fine structure of the inner core.

Deep Drilling — Much speculation 
could be avoided, and much informa- 
tion gained, from analysis (of the type 
to which lunar samples are subjected) 
of samples from the mantle obtained 
by deep drilling. Drilling through the 
sedimentary cover of the ocean floor 
has already been most rewarding in 
its confirmation of the relative youth 
and rate of motion of the oceanic 
crust. But more is needed, and deeper 
penetration through the crust into the 
mantle at several points will eventu- 
ally become necessary. 

High-Pressure Experiments — 
Finally, it would seem that a major 
effort should be made to gain more 
knowledge of the properties and be- 
havior of materials subjected to pres- 
sures of the order of those prevailing 
in the deep interior (tens of millions 
of pounds per square inch). Too much 
is now left to guessing; solid-state 
theory is presently inadequate and 
would anyhow need experimental verification.
A Note on the Earth's Magnetic Field 

The earth's magnetic field was one 
of the earliest subjects of scientific 
inquiry. The field's obvious utility 
in navigation, as well as the intrinsic 
interest of the complex phenomena 

displayed, have led people to study it 
ever since the sixteenth century. 

The study of the earth's field di- 
vides into two parts: that of the main 

part of the field, which changes only 
slowly (over hundreds of years), and 
that of the rapid variations (periods of 
seconds to a year). The latter are 
caused by things that happen in the 



upper atmosphere and in the sun; 
they are largely the concern of space 
research. The former, the slowly 
varying field, is the subject considered here.
Interest in the slowly varying field 
has been greatly increased by the 
realization that it has frequently re- 
versed in the past. (See Figure II-2) 
The reversals have been helpful in 
establishing the history of the oceans 
and the movements of the continents. 
The study of the magnetic field can be 
expected to contribute — indeed, is 
beginning to contribute — to the 
search for oil and minerals. Measure- 
ment of the magnetic field has be- 
come one of the principal tools for 
studying the earth. 

The Origin of the Earth's 
Magnetic Field 

These applications lend a new in- 
terest to the origin of the field itself. 
A theory as to its origin has proved 
hard to find. Only in our own day has 
anything plausible been suggested. 
Although a theory of the origin has 
no discernible immediate practical 
importance, it is a part of the story of 
the earth without which we cannot 
be said to understand what is going 
on. Maybe we can get on very well 
without understanding, but one feels 
happier if important practical techniques have a proper theoretical underpinning.
The difficulty of discovering the 
origin of the slowly varying field is 
due largely to its lack of relation to 
anything else. It is not related to 
geology or geography and goes its 
own way regardless of other phenom- 
ena. In some places, some of the 
changes are due to the magnetization 
of rocks near the surface of the earth, 
but this is not the case over most of 
the field. 

Fashionable theory holds that the 
magnetic field is produced by a dy- 
namo inside the earth. The earth has 
a liquid core; the motions in this core 

(Figure II-2 legend: "Field as at Present" / "Field Reversed")

This figure shows reversals in the polarity of the earth's magnetic field, a phenome- 
non of global extent that is known to occur but has never been witnessed. These 
data are derived from measurements of the direction (N-S) of magnetism frozen into 
lava as it hardens. The effects of the reversals are unknown. 



are believed to cause it to act as a dy- 
namo and to produce electric currents 
and magnetic fields. The theory of the 
process is one of the most difficult 
branches of theoretical physics. Its 
study is closely related to a wide 
range of problems concerning the mo- 
tions of liquids and gases in the 
presence of magnetic fields, especially 
to the problems of generating thermo- 
nuclear energy. 

But no realistic treatment of the 
earth's dynamo has yet been given. 
The subject does not require a 
large-scale organized attack. It needs 
thought and ideas that will come from 

a few knowledgeable and clever peo- 
ple. It is a subject for the academic, 
theoretical physicists with time to 
think deeply about difficult problems 
and with access to large computing facilities.
Perhaps the field's oddest feature is 
that, as already noted, it occasionally 
reverses direction (most recently, per- 
haps, about 10,000 B.C.). Some scien- 
tists have suggested that these rever- 
sals have profound biological effects, 
that whole species could become ex- 
tinct, perhaps as a result of a large 
dose of cosmic rays being let in as the 
reversal takes place. There is some 

observational evidence to support this 
thesis. Other scientists do not believe 
that this would happen, however, and 
do not regard as conclusive the ob- 
servational evidence for extinction of 
species at the time of reversal. The 
matter is clearly of some importance. 
At various times in the past, the 
majority of all forms of life are known 
to have been rather suddenly ex- 
tinguished, and this is a phenomenon 
we would do well to understand. If 
reversals of the magnetic field might 
play a part in this drama, then we 
have added reason for understanding 
their cause and their effects. 



Continental Drift and Sea-Floor Spreading 

The idea that continents move about 
on the surface of the earth was ad- 
vanced about a century ago and has 
always had adherents outside of the 
United States. In this country, where 
no direct evidence existed, the concept 
was first greeted with skepticism and 
then, for fifty years, was viewed as 
nonsense. All American thinking in 
geology — economic or academic — 
was built on the alternative concept of 
a relatively stable earth. 

This is now changed. The past few 
years have seen a basic revolution in 
the earth sciences. Continents move, 
and the rates and directions can be 
predicted in a manner that few would 
have dreamed possible but a short 
while ago. As a consequence, all as- 
pects of American geology are being 
reinterpreted, and in most cases they 
are being understood for the first time.
The continents are an agglomera- 
tion of superposed, deformed, melted, 
and remelted rocks with differing ages 
covering a span of three billion years. 
These rocks are eroded in some places 
and covered with sediment in others. 
The end result is a very complex con- 
figuration of rocks with a history that 
has not been deciphered in more than 
a century of effort by land geologists. 

The ocean basins are very different. 
The rocks are all relatively young; 
they are simply arranged and hardly 
deformed at all; erosion and deposi- 
tion are minimal. Consequently, it 
was logical that their history would be 
unraveled before that of the conti- 
nents if only someone would study it. 

Oceanographers have been engaged 
in just such a study for about two 

decades. It has been enormously ex- 
pensive compared to continental ge- 
ology — and trivially cheap compared 
to lunar exploration. Thus, the haunt- 
ing possibility exists that the same 
effort on the continents might have 
yielded the same results despite the 
complexities. In the event, the break- 
through was made at sea by the in- 
vention of a whole new array of in- 
struments and the development of a 
system of marine geophysical ex- 
ploration. The results have now been 
synthesized with those from the land 
to provide the data for the ongoing 
revolution in the earth sciences. 

Present understanding was achieved 
in what in retrospect seems a curious 
sequence. We depart from it to present the new ideas in more orderly fashion.
Continental Drift 

First, a little geometry. The move- 
ment of a rigid, curved plate over the 
surface of a sphere can occur only as 
a rotation around a point on the 
sphere. This simple theorem, stated 
by Euler, has been the guide for much 
that follows. Earthquakes on the sur- 
face of the earth are distributed in 
long lines that form ellipses and cir- 
cles around almost earthquake-free 
central regions. If there are plates, it 
is reasonable to assume that they cor- 
respond to the central regions and 
that the earthquakes are at the edges 
where plates are interacting. 
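Euler's theorem also makes plate speeds computable: a plate rotating at angular rate ω about its pole carries a surface point at speed ωR sin Δ, where Δ is the point's angular distance from the pole. The sketch below uses a purely hypothetical pole position and rotation rate; it yields speeds of a few centimeters (a few inches) per year, the magnitude quoted elsewhere in this section for sea-floor spreading:

```python
import math

R_CM = 6.371e8  # earth radius in centimeters

def to_xyz(lat, lon):
    """Unit vector for a point at given latitude/longitude (degrees)."""
    lat, lon = math.radians(lat), math.radians(lon)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def plate_speed(pole_lat, pole_lon, rate_deg_per_myr, lat, lon):
    """Surface speed in cm/yr at (lat, lon) for rotation about the pole."""
    p, x = to_xyz(pole_lat, pole_lon), to_xyz(lat, lon)
    cos_delta = sum(a * b for a, b in zip(p, x))
    sin_delta = math.sqrt(max(0.0, 1 - cos_delta**2))
    omega = math.radians(rate_deg_per_myr) / 1e6   # rad/yr
    return omega * R_CM * sin_delta

# Hypothetical pole at (62 N, 90 W) rotating 0.8 degrees per million years:
print(plate_speed(62, -90, 0.8, 0, 0))     # point far from the pole: cm/yr scale
print(plate_speed(62, -90, 0.8, 62, -90))  # at the pole itself: essentially zero
```

The speed is greatest a quarter-circle away from the pole and falls to zero at the pole, which is why relative motion varies along a single plate boundary.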

The motions revealed by earthquakes confirm
the existence of plates with marvelous 
persuasiveness. Euler's theorem speci- 
fies the orientation of earthquake mo- 
tion, and it has been confirmed for 

many plates. Moreover, for any given 
plate the earthquakes in front are 
compressional as two plates come to- 
gether; they are tensional in the rear, 
where plates are moving apart. Along 
the sides they are as expected when 
one plate moves past another. (See 
Figure II-3) 

The knowledge that plates exist 
and are moving has immediate im- 
portance with regard to such matters 
as earthquake prediction. For ex- 
ample, one boundary between two 
plates runs through much of Cali- 
fornia along the San Andreas Fault. 
We can measure the rate of offset in 
some places and we now know that 
related offsets must occur everywhere 
else along the plate boundaries. 

Earthquakes indicate that moving 
plates exist now. But the evidence 
that they existed in the past is of 
a different sort. This comes from a 
vast array of geological and geophysi- 
cal observations — topographic, mag- 
netic, gravity, heat flow, sediment 
thickness, crustal structure, rock 
types and ages, and so on. Integra- 
tion of these observations indicates 
that, where plates move apart, new 
igneous rock rises from the interior 
of the earth and solidifies in long 
strips. These in turn split apart and 
are consolidated into the trailing edges 
of the two plates. Because the rising 
rock is hot, it expands the trailing 
edges of the plates. The expansion 
elevates the sea floor into long central 
ridges, of which the Mid-Atlantic 
Ridge is but a part. As the new strip 
of plate moves away, it cools and 
gradually contracts. The cooling and 
contraction cause the sea floor to sink. 
This is why the ridges have gently 
sloping sides that gradually descend 






This diagram shows the six major "plates" of the earth. The double lines indicate 
zones where spreading or extension is taking place. The single lines indicate zones 
where the plates are converging or compression is taking place. Earthquake 
activity is found wherever the plates come in contact. 

into the deep basins. The basins are 
merely former ridges. 
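The cooling-and-sinking picture can be made roughly quantitative. Empirical fits (worked out from later surveys, not given in this report) find that depth increases approximately as the square root of crustal age, d ≈ d_ridge + c√t; the constants below (a ridge depth near 2,500 meters and c near 350 meters per square root of a million years) are approximate values used for illustration only:

```python
import math

def ocean_depth_m(age_myr, ridge_depth=2500.0, c=350.0):
    """Approximate sea-floor depth for crust of a given age.

    ridge_depth and c are rough empirical constants, used here
    for illustration only.
    """
    return ridge_depth + c * math.sqrt(age_myr)

for age in (0, 4, 16, 64):
    print(f"{age:3d}-Myr-old crust: ~{ocean_depth_m(age):.0f} m deep")
```

The square-root form reflects conductive cooling: the older the crust, the thicker (and denser) its cooled layer, and so the deeper it rides.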

Sea-Floor Spreading 

The magnetic field of the earth re- 
verses periodically, and a record of its 
polarity is forever preserved in the 
orientation of magnetic minerals in 
volcanic rocks which cooled at any 
particular time. This apparently unre- 
lated fact gives us clues to the motion 
of the plates. The new rocks at the 
trailing edges of the plates record the 
magnetic polarity like a tape recorder 
and then, like a magnetic tape, they 
move on and the next polarity change 
is recorded. This occurs in each of the 
plates moving away from their com- 
mon boundary. As a result, the sea 

floor in the Atlantic, for example, is 
a bilaterally symmetrical tape record- 
ing of the whole history of the earth's 
magnetic field since the basin first 
formed as a result of Africa and South 
America splitting apart. We usually 
read a tape recording by moving the 
tape, but the stereo records of the 
oceans are read by moving a ship or 
airplane with the proper instruments 
over the sea floor. From work on 
land and at sea, the changes in mag- 
netic polarity have been dated. We 
can thus convert the magnetic records 
into age-of-rock records and prepare 
a geological map of the sea floor. In 
the Atlantic, to continue with the pre- 
vious example, the youngest rocks are 
in the middle; they grow progres- 
sively older toward the continents. 
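Reading the "tape" is simple arithmetic: at a constant half-spreading rate, a stripe's distance from the ridge axis is proportional to its age. The sketch below assumes a hypothetical half-rate of 2 centimeters per year and hypothetical stripe positions:

```python
HALF_RATE_KM_PER_MYR = 20.0   # 2 cm/yr expressed in km per million years

def crust_age_myr(distance_km, half_rate=HALF_RATE_KM_PER_MYR):
    """Age of sea floor a given distance from the ridge axis."""
    return distance_km / half_rate

# Hypothetical stripe boundaries, in km from the ridge axis:
for d in (14, 40, 66):
    print(f"stripe at {d} km -> crust about {crust_age_myr(d):.1f} Myr old")
```

Matching such computed ages against the independently dated reversal sequence is what turns a magnetic survey into a geological map of the sea floor.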

The remains of sea-surface micro- 
organisms rain constantly onto the 
sea floor to form layers of ooze and 
clay. Where the crust is young, the 
layers are thin; they thicken where 
they have had more time to accumu- 
late. The very youngest crust, which 
has just cooled, is exposed as black, 
glossy, fresh rock of the type seen 
in lava flows in Iceland and Hawaii. 
The outpouring of lava occurs at a 
relatively constant rate but the plates 
spread apart at different rates depend- 
ing on the geometry. Consequently, 
lava piles up into very long volcanic 
ridges with a relief that varies with 
the spreading rate. A slow spreading 
rate produces mountainous ridges; 
fast spreading produces low, gently 
sloping, but very long hills. 



Most of the remaining topography 
of the sea floor is in the form of 
roughly circular volcanoes which can 
grow as large as the island of Hawaii. 
These volcanoes remain active for 
tens of millions of years and during 
that time they drift as much as a 
thousand miles. If they develop on 
young crust, they necessarily sink 
with the crust as it cools. This seems 
to be the explanation for the drowned 
ancient islands commonly found in 
the western Pacific. Once they were 
islands, but now they are as much as 
a mile deep. 

The phenomena that occur where 
two plates come together are natu- 
rally different from those that occur 
where they spread apart. If the plates 
come together at rates of less than a 
few inches per year they seem to 
crumble and deform into young moun- 
tain ranges. Where they come to- 
gether faster, the deformation cannot 
be accommodated by crumbling. In- 
stead, the plates overlap and one of 
them plunges deep into the interior 
where it is reheated and absorbed. 
This produces the most intense de- 
formations on the surface of the 
earth. A line of fire of active vol- 
canoes, deep depressions of oceanic 
trenches, and swarms of earthquakes 
mark the line of junction. Ancient 
rocks of the continents are now being 
reinterpreted as having once been 
deep-sea and marginal-sea sediments 
that were deposited where plates 
came together. They occur in central 
California, the Alps, and many other 
mountain ranges. Typically, they are 
highly deformed, which seems quite 
reasonable considering what must 
happen when plates smash together. 

Implications of the 
New Knowledge 

Since the sea floors are young, con- 
tinental rocks contain what records 
may exist of ancient plate motions. 
The present revolution in understand- 
ing was needed to serve as a guide to 
geological exploration, however. Land 

geologists had long noted similarities 
between the rocks on opposite sides 
of the South Atlantic. In the past few 
years, many more confirming correla- 
tions have been discovered. Knowing 
that the continents were once joined, 
we can reconstruct, in the mind's eye, 
a history in which they were once but 
a small distance apart and the nascent 
Atlantic was a narrow trough. 

We should pause for a moment to 
consider how important to economic 
resource development the new ideas 
may be. The continental shelves of 
Atlantic coastal Africa and South 
America, for example, contain salt de- 
posits and sediments in thick wedges 
that seem to lack any dam to trap 
them. These deposits contain oil. We 
can imagine the difficulties American 
oil geologists had in interpreting their 
records and predicting where to drill 
when they had no idea of how the 
oil-bearing rocks accumulated. How 
easy it may now become, when their 
origin can be readily explained as 
occurring in the long narrow trough 
of the newborn Atlantic! 

Until now exploration has been 
adequate to demonstrate the existence 
of continental drifting and global de- 
formation, but much remains to be 
done to flesh out the reconstruction of 
the history. If the exploration at sea 
continues and is matched by compa- 
rable effort at continental margins 
and on land, we may hope to see the 
beginning of a deep new understand- 
ing of the earth. 

Continental Structures and Processes
Our knowledge of continental 
crustal processes, except in the vicinity 
of the continental margins, has lagged 
behind our knowledge of oceanic 
crustal processes. One reason for the 
great progress in the study of oceanic 
crustal processes is the beautifully 
simple pattern of magnetic anomalies, 
magnetic-field reversals, and earth- 
quake and volcanic activity in the 

vicinity of the continental margins 
that led to the discovery of sea-floor 
spreading. But another reason must 
be that the earth scientists who made 
these advances were not inhibited by 
the traditions and prejudices of scores 
of years of separation into highly 
compartmentalized sub-disciplines. Of 
course, the continental crust is com- 
plex, and simple patterns, if they 
exist, are obscured by the geological 
and geophysical scars of billions of 
years of continental damage and re- 
building. But the attitudes and study 
methods that led to the discovery of 
sea-floor spreading and downward- 
plunging plates will be needed if we 
are to improve our knowledge and 
understanding of continental proc- 
esses during the 1970's. 

Structure of the Continental Crust 

The average thickness of the con- 
tinental crust of the United States, as 
determined by seismic measurements, 
is 41 kilometers; its volume is about 40,000 × 10⁴ km³. The average crustal thickness to the west of the Rocky Mountains is 34 kilometers,
while the average thickness to the 
east of the mountains is 44 kilometers. 

The volume of the western crust is 
only about 10,000 × 10⁴ km³. Thus, the western crust accounts for only one-fourth of the continental total by volume, as compared to 30,000 × 10⁴ km³ for the eastern crust, although its
surface area is almost a third of the 
total. Average seismic velocities also 
suggest that the western crust is less 
dense than the eastern crust. Thus, 
the western crust — the portion of the 
crust in which continental dynamic 
processes (earthquakes, volcanic erup- 
tions, magmatism, ore deposition, and 
mountain-building) have been active 
during the past 100 million years or 
so — is the lesser fraction of the con- 
tinent in terms of volume and mass. 
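These figures are internally consistent, as a quick check shows: dividing each volume by its average thickness gives the implied surface area, and the western area indeed comes to about a third of the total while its volume is only a fourth:

```python
# Figures from the text (volumes in units of 10^4 km^3, thicknesses in km)
total_volume, total_thickness = 40_000, 41
west_volume,  west_thickness  = 10_000, 34
east_volume,  east_thickness  = 30_000, 44

# Implied surface areas, in units of 10^4 km^2
total_area = total_volume / total_thickness
west_area  = west_volume / west_thickness
east_area  = east_volume / east_thickness

print(f"western share of volume: {west_volume / total_volume:.2f}")  # one-fourth
print(f"western share of area:   {west_area / total_area:.2f}")      # about a third
print(f"areas add up? {abs((west_area + east_area) - total_area) / total_area < 0.02}")
```

The thinner, less dense western crust thus occupies a disproportionately large share of the surface for its mass.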

Further Western-Crust Data — A 
recent reinterpretation of a network 
of 64 seismic-refraction profiles re- 



corded by the U.S. Geological Survey 
in California, Nevada, Idaho, Wyo- 
ming, Utah, and Arizona from 1961 
to 1963 indicates that crustal thick- 
ness reaches maxima under the Sierra 
Nevada Range (42 km.), the Trans- 
verse Ranges of southern California 
(37 km.), and in southwestern Nevada 
(36 km.). The crust is relatively thin 
under the Coast Ranges of California 
(24 to 26 km.), the Mojave Desert 
(28 km.), and parts of the central 
Basin and Range Province in Nevada 
and Utah (29 to 30 km.). The base 
of the crust dips generally from the 
Basin and Range Province toward 
greater depths in the Colorado Plateau 
(43 km.), the middle Rocky Mountains 
(45 km.), and the Snake River Plain 
(44 km.). A velocity boundary zone 
between the upper and lower crust 
can be well determined only beneath 
the middle Rocky Mountains, the 
Snake River Plain, and the northern 
part of the Basin and Range Province. 
The average velocity of the western 
crust is low, typically about 6.1 to 6.2 
kilometers per second, but signifi- 
cantly higher in the Colorado Plateau 
(6.2 to 6.5 km/sec), and the Snake 
River Plain (6.4 km/sec). Upper- 
mantle velocity is less than 8.0 kilo- 
meters per second under the Basin and 
Range Province, the Sierra Nevada, 
and the Colorado Plateau and equal 
to or greater than 8.0 kilometers per 
second under the Coast Ranges of 
California, the Mojave Desert, and 
the middle Rocky Mountains. 

Recent refraction work indicates 
that the average crustal velocity in the 
Columbia Plateau is high, as expected, 
but that the crust is about 10 to 15 
kilometers thinner than it is in adja- 
cent areas. Thus, the Columbia Pla- 
teau has seismic properties similar to 
those of a somewhat overthickened 
oceanic crust. So does the Diablo 
Range of California. The Franciscan 
formation (a metamorphosed struc- 
ture) exposed in the Diablo Range, 
believed by many geologists to have 
been deposited in a Mesozoic oceanic 
trench, apparently extends to a depth 
of 10 to 15 kilometers; it was de- 

posited directly on a basaltic crust 
that now extends to a total depth of 
about 25 kilometers. 

Composition of the 
Continental Crust 

In a rock of a given composition, 
both metamorphic degree and water 
content affect seismic velocities at 
various pressures and temperatures. 
Recent laboratory research in the 
United States suggests that pressures 
and temperatures at most crustal 
depths would place the rocks within 
the stability field of eclogite rather 
than basalt. Seismic velocities in the 
lower crust formerly interpreted as 
appropriate for basalt are therefore 
regarded by many petrologists as 
more appropriate for more silicic rock. 
However, the presence of significant 
amounts of water in the lower crust 
would produce abundant hydrous 
minerals in a rock of basaltic composition; this would result in seismic velocities similar to those observed in the lower crust.
Given such uncertainties, it seems 
that the only positive assertion that 
can be made about the average com- 
position of the continental crust is 
that it is intermediate and probably 
not too different from monzonite. The 
lower crust may be basaltic, interme- 
diate, or even silicic, and the most re- 
liable guide to its composition is 
probably geologic association. For ex- 
ample, in the Snake River Plain, where 
basalt is exposed at the surface, it 
seems reasonable to interpret the high 
velocities of the lower crust as indica- 
tive of basalt. In other areas, the 
higher velocities should probably be 
regarded as indicative of intermediate rock.

Structure and Composition of the 
Upper Mantle 

Seismic probing of the upper man- 
tle has established the existence of 
two important velocity transition 
zones: one at a depth of about 400 
kilometers, in which magnesium-rich 
olivine is transformed with increasing 
pressure to spinel; and another at a 
depth of about 650 kilometers, in 
which spinel is presumed to be trans- 
formed to compact oxide structures 
with increasing pressure. Recent esti- 
mates of the density of the uppermost 
mantle, based on statistical models 
using all available evidence, yield 
densities of 3.5 to 3.6 grams per cubic centimeter,
significantly higher than the densities 
deduced from the usual velocity- 
density relations. Rocks of this den- 
sity and the seismic velocities ob- 
served just below the Mohorovicic 
discontinuity could be either eclogite 
or iron-rich peridotite. Given the 
lateral heterogeneity of the upper 
mantle indicated by the variable seis- 
mic velocities, it seems most reason- 
able to regard the upper mantle as 
grossly heterogeneous, consisting pri- 
marily of peridotite but with large 
lenses, or blocks, of basaltic, eclogitic, 
intermediate, and perhaps even silicic 
material distributed throughout. 

There is much seismic evidence that 
a low-velocity zone for both P- and 
S-waves exists in the upper mantle in 
the western third of the United States, 
with a velocity minimum at a depth of 
100 to 150 kilometers. This zone 
seems to be particularly pronounced 
in the Basin and Range Province. The 
low-velocity zone for P-waves is ap- 
parently absent or greatly subdued in 
the eastern two-thirds of the United 
States. The most likely explanation 
for the low-velocity zone is that the 
mantle rocks there are partially molten.

Continental Margin Processes 

The interaction of the laterally 
spreading sea floors with the con- 
tinental margins — resulting in the 
downward plunging of rigid litho- 
spheric plates beneath the continents, 
accompanied by shallow- to deep- 
focus earthquakes and volcanic ac- 
tivity — has been elucidated by a 
beautiful synthesis of geological and 
geophysical evidence. The new 
"global tectonics" appears to provide 
an adequate explanation of the struc- 
ture and continental-margin processes 
of most of the circum-Pacific belt. 

Along the California coast the pat- 
tern is different, however. No oceanic 
trench lies seaward of California, and 
the earthquakes are confined to nar- 
row, vertical zones beneath the San 
Andreas fault system to depths that 
do not exceed about 15 kilometers. 
Thus, the brittle behavior of the 
crust in coastal California is confined 
roughly to the upper half of the crust. 
Fault-plane solutions of the earth- 
quakes occurring along the San An- 
dreas fault system are predominantly 
right-lateral strike-slip, but some solu- 
tions indicating vertical fault move- 
ments are also obtained. As already 
noted, both geological and geophysi- 
cal studies suggest that the Mesozoic 
Franciscan formation of the Coast 
Ranges was deposited in an oceanic 
trench. These observations and in- 
ferences are compatible with the con- 
cept of a westward-drifting continent 
colliding with an eastward-spreading 
Pacific Ocean floor, resulting in conti- 
nental overriding of the Franciscan 
Trench and the East Pacific Rise and 
development of the San Andreas sys- 
tem as a complex transform fault. The 
pattern of these relations is not tidy, 
however, and many problems remain 
to be solved in unraveling the struc- 
ture and continental-margin processes 
of California. 

Isotopes and the Evolution and 
Growth of Continents 

Lead and strontium isotopic studies 
of continental and oceanic rocks com- 
pleted during the past decade have 
contributed greatly to a better under- 
standing of processes involved in the 
growth and development of conti- 
nents through geologic time. The 
studies of continental igneous rocks 
indicate addition of primitive (mantle- 
derived) material and hence support 
the concept of continental growth. 

Lead isotope studies of feldspars rep- 
resenting significant volumes of crus- 
tal material place constraints on the 
rate of transfer of uranium, thorium, 
and lead from the mantle to the crust 
and suggest early development (3,500 
to 2,500 million years ago) of a sig- 
nificant portion of the crust. Geochro- 
nologic studies of Precambrian rocks 
show that at least half of the North 
American crust was present 2,500 
million years ago, lending support to 
this thesis. 

Lead and strontium data obtained 
on young volcanic rocks in the oceanic 
environment have provided direct 
information on the existence of sig- 
nificant isotopic and chemical hetero- 
geneities in the upper mantle. Sys- 
tematics provided by these decay 
schemes allow estimates of the times
of development and preservation of 
these chemical heterogeneities, many 
of which must have been generated in 
Precambrian time. If a dynamic crust- 
mantle system is assumed, the data 
for the oceanic environment can be 
interpreted as reflecting events related 
to the development and growth of 
continental regions. 
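The decay-scheme systematics mentioned above can be illustrated with a worked example. The sketch below uses the rubidium-strontium scheme and the standard radiogenic-growth equation; the decay constant is the accepted value for rubidium-87, but the initial ratio, the Rb/Sr ratio, and the age are illustrative assumptions, not values taken from this report:

```python
import math

LAMBDA_RB87 = 1.42e-11  # decay constant of 87Rb, per year

def sr87_growth(initial_ratio, rb_sr_ratio, age_years):
    """Present-day 87Sr/86Sr ratio of a rock, given its initial ratio,
    its present 87Rb/86Sr ratio, and its age, from the standard decay
    equation. Input values used below are illustrative only."""
    return initial_ratio + rb_sr_ratio * (math.exp(LAMBDA_RB87 * age_years) - 1.0)

# A hypothetical 2,500-million-year-old rock with 87Rb/86Sr = 0.5:
print(sr87_growth(0.700, 0.5, 2.5e9))  # roughly 0.718
```

Measured ratios run the arithmetic in reverse: an observed strontium ratio, together with a rubidium/strontium ratio, constrains when a reservoir became chemically distinct, which is how the Precambrian heterogeneities described above are dated.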

Studies of volcanic rocks being 
erupted at the continental margins 
allow an isotopic evaluation of the 
concept of ocean-plate consumption 
in this environment. Lead isotopes in 
volcanic rocks of the Japanese arc are 
compatible with partial melting of 
the underthrust volcano-sedimentary 
plate. Strontium isotopes in calc- 
alkaline rock series have placed sig- 
nificant constraints on the basalt- 
hybridization theory and the concept 
of partial melting of older crust, however.

Local studies of lead and strontium 
in continental igneous rocks have al- 
lowed evaluation of the involvement 
of crustal material in the genesis and 
differentiation of these rocks. The 
studies are circumscribed, however, 
by the lack of chemical and isotopic 
knowledge of the lower crust. If the 
isotopic anomalies of some conti- 
nental rocks are related to generation 
in, or assimilation of, the lower crust, 
this region must be characterized 
by low uranium/lead and rubidium/ 
strontium ratios relating to earlier 
depletion of uranium and rubidium, 
perhaps at the time of initial crustal differentiation.

Although these isotopic studies 
have provided many answers, they 
have also generated new questions 
and problems. Continued work on 
oceanic volcanic rocks and ultramafic
rocks of mantle mineralogy is needed.
High-pressure experimental
work to determine trace-element par- 
titioning in the mantle is needed to 
make full use of the isotopic variations 
that have been observed. Chemically 
and isotopically, less is known about 
the lower crust than the upper mantle 
and upper continental crust. Direct 
sampling of this environment is a dis- 
tinct possibility with modern drilling 
technology; it would provide sorely 
needed information not only from the 
isotopic standpoint but also for many 
other earth-science disciplines. 

Tectonics and the Discovery of 
Mineral Deposits 

Adequate supplies of mineral raw 
materials are essential to our econ- 
omy, but they are becoming increas- 
ingly difficult to find as we are forced 
to seek ore deposits that offer only 
subtle clues to their existence and lo- 
cation. The science of ore exploration 
is advancing rapidly, however. And 
as it does, more is being learned of 
the basic principles controlling the oc- 
currence and distribution of ore de- 
posits and their relation to continental 
structures. Economic geologists are 
increasingly adept at predicting where 
deposits are apt to occur — where in 
terms of geologic and tectonic envi- 
ronment and where in terms of geo- 
graphic areas. 

The essential first step toward in- 
creasing our knowledge in this field is 
to plot known mineral deposits and 
districts on a geologic-tectonic map. 
This effort is well under way. Ameri- 
can geologists are participating in an 
international committee for the Geo- 
logic Map of the World, sponsored 
by a commission of the International 
Union of Geological Sciences which is 
compiling a world metallogenic map. 
A first version of the North America 
map has been completed. 

Although the scale of the map 
(1:5,000,000) necessitates severe
condensation of data, the general dis- 
tribution of many ore types can be 
represented and compared. For ex- 
ample, the relation of the strata-bound 
massive sulfide deposits to volcanic 
(eugeosynclinal) belts of Precambrian 
rocks in the Shield, Paleozoic rocks in 
the Appalachians, and Mesozoic rocks 
in the Cordillera shows rather clearly. 
Nickel sulfide ores are distributed 
around the periphery of the Superior 
Province. A rather distinct class of 
magnetite-chalcopyrite replace- 
ment deposits in carbonate rock seems 
to follow the Cordilleran margin of 
the continent. Tungsten deposits lie 
east of the quartz-diorite line. Many 
epigenetic deposits in the interior of 
the continent seem related to trans- 
verse structures (lineaments), and a 
suggestion of zonation on a conti- 
nental scale seems to be emerging. 

Further refinement of the metal- 
logenic map is under way. In combi- 
nation with general studies of conti- 
nental processes and structures, this 
will enable exploration geologists to 
locate promising areas in which to 
search for additional mineral deposits. 

Needed Research on 
Continental Processes 

The Continental Margins — The 
concepts of global plate tectonics for 
the first time give earth scientists a 
general working hypothesis to explain 
the varied continental processes that 
characterize the mountain-building as- 
sociated with active continental mar- 
gins: transcurrent faulting, volcanism, 
thrust faulting, and the like. Clearly, 
an intensification and broadening of 
geological and geophysical research 
along continental margins such as the 
coastlines of California, Oregon, and 
Washington is critically needed. 

Geologic Processes in the Conti- 
nental Interior — All earth scientists 
recognize that the continental plates 
have been actively deformed, and that 
concepts of rigid continental plates 
must be modified in practice. In par- 
ticular, many students of the geology 
of the western United States recognize 
that the continental crust in and west 
of the Rocky Mountains has been ac- 
tively deformed over the past 100 mil- 
lion years or so, and is still being 
actively deformed in many places. 
Plate tectonics is not irrelevant, how- 
ever. Application of the attitudes and 
study methods that led to the con- 
cepts of global plate tectonics can be 
expected to lead to significant and 
dramatic advances in our knowledge 
of continental processes. 

If the westward-drifting continent 
overrode the eastward-spreading Pa- 
cific Ocean plate and continental mar- 
gin features such as oceanic ridges 
and trenches, where are these features 
now? Is the Basin and Range Province 
behaving similarly to a spreading 
ocean floor? If so, where are the 
spreading centers? Are they along 
the Wasatch Mountain front, or the 
Rio Grande rift zone? What dynamic 
continental processes are occurring 
east of the Rocky Mountains, and 
how do they relate to the active 
processes of the western crust? 

Seismic Monitoring — Seismology 
is now being focused in unprecedented 
detail on the active continental proc- 
esses along the California continental 
margin. A similar focusing of seis- 
mological effort on the earthquake 
zones of Washington and the conti- 
nental interior is needed. In particu- 
lar, intensification of seismological 
effort is recommended for the Ven- 
tura-Winnemucca earthquake zone of 
California and Nevada; the Rocky 
Mountain zone of Arizona, Utah, 
Idaho, Wyoming, and Montana; the 
Rio Grande rift zone of Colorado and 
New Mexico; the Mississippi Valley 
earthquake zone of Illinois and Mis- 
souri; and the earthquake zones of 
the New England region. Seismic 
monitoring should be accompanied by 
measurements of crustal strain and 
appropriate geological and geophysi- 
cal exploration of major crustal features.

Structural and sedimentary basins, 
in which large reserves of petroleum 
and other economic fuels and minerals 
are concentrated, are among the most 
prominent and significant geologic 
features of the continents, but the 
processes of their formation are poorly 
understood. An intensive three- 
dimensional study of all aspects of the 
development of one or more struc- 
tural and sedimentary basins through 
geologic time, relating that develop- 
ment to economic deposits and envi- 
ronmental assets and liabilities perti- 
nent to wise long-term use of the 
land, would make a great contribution 
to our knowledge of continental proc- 
esses. The beginning of such a study 
has been made in the Wind River 
Basin of central Wyoming, but this 
study has been concentrated mainly 
on the upper part of the earth's crust. 
Basin development is necessarily con- 
trolled by upper-mantle as well as 
crustal processes, and therefore de- 
tailed geophysical study of the deep 
crustal and upper-mantle foundations 
of one or more large basins is needed. 

Deep Continental Drilling — Our 
knowledge of the composition of the 
lower continental crust is clearly in- 
adequate. In addition to more detailed 
geophysical exploration of the deep 
crust, a program of deep continental 
drilling is critically needed. Locations 
for penetrating the lower crust that 
will be within the reach of present 
drilling technology can be selected 
from geophysical studies. 

Geochemical Research — Although 
we have good qualitative understand- 
ing of the major features of the geo- 
chemical cycles, we are still deficient 
in detailed quantitative knowledge of 
the geochemical cycles of practically 
all elements. Research on the geochemical
cycles of the elements, carbon for
example, should be intensified. Research on the behavior

of fugitive constituents (e.g., water 
and sulfur dioxide) in igneous and 
metamorphic processes is also criti- 
cally needed to improve our under- 
standing of continental geochemical 
processes and their relations to tectonics.

If research on continental struc- 
ture and processes is intensified and 
strengthened, we can expect the 
1970's to be as exciting a decade of 
discovery for the continents as the 
1960's were for the oceans and the 
continental margins. 

Practical Implications of Major Continental Processes 

Recent verification that the crust of 
the earth moves readily over the 
earth's interior in the form of large 
sliding plates has reoriented geologi- 
cal thinking in a number of ways 
that affect our understanding of where 
many natural resources occur. We 
also have new insights into such 
natural hazards as biological extinc- 
tions, the development of ice ages, 
major earthquake belts, and regions 
of volcanism, to cite just a few natural 
hazards that are of continuing inter- 
est. In fact, the new ideas of conti- 
nental drift and sea-floor spreading 
have demanded a re-evaluation of 
many of the premises underlying the 
subjects of geology, geochemistry, 
oceanography, and long-term changes 
in atmospheric circulations. 

Resource Distribution 

Much of our information on geo- 
logical and geochemical distributions 
comes from a study of ancient sys- 
tems that have existed over great 
lengths of geological time. In many 
instances, it is clear that these ancient 
systems operated differently from 
those of today. It now appears that 
the earth's sliding-plate mechanism 
has caused relative motions between 
continental and oceanic regions, 
formed and destroyed ocean floors, 
developed mountain belts, and 
changed the positions of land masses 
with respect to the equator or to the 
poles in times that are short com- 
pared to the time it took to form 
many of our major natural resources. 

Evidence is building up that we are 
currently in a stage in earth history 
that is considerably more active than 
that pertaining over much of the geo- 
logical past. It is beginning to appear 
that mountain belts are longer and 
higher, earthquake activity greater, 
and a large array of other features 
more pronounced in present times 
than in an average geological period 
in the past. Furthermore, by relative 
motions between the continental land 
masses and the pole of rotation of 
the earth, it seems that climates may 
have changed rather radically in the 
recent geological past. 

This means that we must take a 
new look at theories of the origin of 
many mineral deposits, natural fuels, 
and surface deposits, so that we may 
better predict their locations and ex- 
tensions. For example, it is clear that 
the petroleum deposits in the Prudhoe 
Bay area of Alaska were formed at 
much lower latitudes, the potash 
salt deposits of Saskatchewan were 
formed closer to the equator, and the 
onset of the devastating ice ages was 
brought about by shifts in oceanic 
circulation resulting from shifting 
land masses. It is necessary to know 
these correlations if we are to under- 
stand the processes that cause the de- 
velopment of petroleum and salt de- 
posits, polar ice-caps, and many other 
resources or hazards that are of con- 
cern to man. 

Minerals — The new understand- 
ing of the down-thrusting of ocean 
floor beneath continental edges has 
led to correlations between these 
zones of downward motion and a 
superjacent distribution of certain 
types of mineral deposits. For ex- 
ample, it has been discovered that 
copper deposits of the type found in 
the southwestern United States, which 
supply most of our copper today, 
occur in belts that lie above these 
zones and that the age of emplace- 
ment of the deposits generally coin- 
cides with the time of the down- 
thrusting movement. Thus, it appears 
that the disappearance of crust, the 
development of volcanoes, and asso- 
ciated mineral deposits are tied to- 
gether by a process that involves the 
melting and fractionating of down- 
dragged materials. This has led to 
much prospecting activity in regions 
where the downward disappearance 
of crust is known from large-scale 
effects. The result has been the devel- 
opment and discovery of a number of 
new, hitherto unsuspected deposits. 

Another way of seeking new areas for prospecting has been the prediction of extensions of known mineral belts where they occurred before the continental land masses were separated. For example, South America fitted into Africa in a single supercontinent not too long ago, geologically speaking. (See Figure II-4) The locations of gold, manganese, iron, tin, uranium, diamonds, and other mineral deposits in Africa are much better known than those in South America, although it is expected that South America's mineral potential east of the Andean chain will eventually be as great as in comparable regions of Africa. Prospecting for mineral belts in northeastern Brazil, the Guianas, and southern Venezuela in areas assumed to be extensions of African belts has begun to disclose similar deposits.

In 1912 Wegener noted the striking similarity in the shape of the coastlines of the Americas and of Europe and Africa. He suggested that at one time there had been a single supercontinent, as shown in the upper left of the figure. Wegener postulated that the landmass broke up and allowed the continents to drift apart, as shown in the upper right, until they assumed the positions of today.

Petroleum and Natural Gas — Similarly, where continents have been broken apart by rifting motions with the development of a seaway, the new edges are subject to the deposition of shelf-type sediments. Prior to the understanding of continental drift, many of these continental shelves were believed to be ancient. Now it is known that all such new edges are bounded by thick sections of younger sediments which may have oil-bearing potential. This knowledge, coupled with the geological information provided in anticipating depths of drilling as well as structures, has led major oil companies to undertake a worldwide prospecting program. The result has been the discovery of new areas of economic importance. The same applies to sediment accumulations and potential depths of gas accumulation in such regions as the North Sea, where reserves of natural gas are now a substantial factor in the economies of neighboring countries.

Thermal Water — An example of the possibility of unexpected return from the study of major crustal processes is seen in the power potential of the thermal waters of the Salton Sea area in California. Lower California is splitting off from the mainland by the same process of sea-floor generation as in the mid-Atlantic — namely, by the upwelling of hot rock materials from depth. This zone of upwelling and splitting apart continues up the Gulf of California and into the continental region underlying the Imperial Valley, undoubtedly causing the great fault systems that have produced the California earthquakes.

The heated waters resulting from thermal upwelling represent a great power potential. It is estimated, for example, that the power potential in the Salton Sea, Hungary, and other regions where there are large, deep reservoirs of heated water is of the same order of magnitude as the known oil reserves of the earth. Steam wells and natural geothermal heat have been exploited commercially in volcanic regions of Italy, Iceland, and New Zealand, and on an experimental basis in the Salton Sea area.

Environmental Pollution 

Nature is the greatest polluter of 
the environment. Geochemical proc- 
esses have concentrated radioactive 
elements at the surface, so that man 
is constantly bombarded by a gamma- 
ray flux much larger than the average 
for the earth as a whole. Streams and 
rivers carry rock flour from the action 
of glaciers in high latitudes and 
hydrated ferric oxides, clays, and 
other debris in lower latitudes to such 
an extent that deltaic and coastal de- 
posits cause problems for shipping 
and water transport, harbors, and re- 
sort beaches. When a drainage system 
cuts through a large mineral deposit, 
it dumps its load of partially oxidized 
and soluble metal salts into down- 
stream waters. 

In order to measure environmental 
pollution and change, it is necessary 
to know the base levels of natural 
pollution and their distributions and 
dynamics in space and time. The 
response to thermal pollution in rivers 
can be predicted on the basis of ob- 
serving the ecology of warm waters 
in tropical regions. The same applies 
to oceanic waters. Natural variations 
in radioactivity provide us with sta- 
tistics on the effect of a widely 
dispersed distribution of radioactive 
wastes. Variations in the trace-ele- 
ment abundances in natural waters 
and soils give us an insight into the 
effect of these on biological systems. 
Thus, it can be said that a study of 
the geochemical and geological dis- 
tributions and processes forms a nec- 
essary base for the observation of 
perturbations to the natural levels 
and rates. 

Atmospheric Changes 

The history of climatic change, as 
different land masses approached or 
receded from the equator, has left its 
record on the ecology and on surface 
deposits. In addition, evolutionary 
change of living organisms has super- 
imposed progressive changes in the 
chemistry of the earth's surface. 
Thus, early in earth's history, great 
thicknesses of banded iron formations 
resulted from a combination of evolv- 
ing bioorganisms and the atmosphere 
of the time. 

Atmospheric change has been 
closely coupled with the evolution of 
photosynthesis processes and, more 
recently, with the nature and extent 
of land areas in the more tropical 
regions of the earth. There is good 
evidence to indicate that the partial 
pressures of oxygen and carbon diox- 
ide in the atmosphere are significantly 
different from those in the past, with 
some estimates indicating a drastic 
variation in the content of oxygen in 
particular. An understanding of the 
balance between major tropical forest 
areas, such as in the Amazon region, 
and the partial pressure of oxygen in 
the atmosphere would be of some sig- 
nificance. But precise measurements 
of the rate of change of oxygen 
partial pressure with the oxygen- 
generating living systems on land and 
in the oceans have not been made on 
a time-scale of interest to human 
existence. We therefore know little 
of the short-term effects that might 
result from a substantial change in 
human land use. 



Earthquake Prediction and Prevention 

The earthquakes that we are really 
interested in predicting are the largest 
ones, those capable of taking human 
life and causing property damage. 
Earthquakes of this size have occurred 
countless times in the past few million 
years, mostly in relatively narrow 
belts on the earth's surface. 

The destructive powers of earth- 
quakes and resulting tsunami waves 
are well known. For example, the ex- 
tremely destructive Alaskan earth- 
quake of 1964 killed about 100 people 
and caused measurable damage to 
75 percent of Anchorage's total de- 
veloped worth. The earthquake also 
generated tsunamis that caused se- 
vere damage throughout the Gulf of 
Alaska, along the west coast of North 
America, and in the Hawaiian Islands. 

A very severe earthquake in 1960 
killed approximately 2,000 people in 
Chile and rendered about a half mil- 
lion people homeless. Property dam- 
age was estimated to be about $500 
million. Tsunami damage from this 
earthquake occurred along the shores 
of South America, certain parts of 
North America (principally southern 
California), the Hawaiian Islands, 
New Zealand, the Philippines, Japan, 
and other areas in and around the 
perimeter of the Pacific Ocean. About 
$500,000 damage was suffered by the 
southern California area, while about 
25 deaths and $75 million damage 
were suffered by the Hawaiian 
Islands. The Philippines incurred 
about 32 deaths. Japan sustained ap- 
proximately $50 million damage. 

Earthquake Zones — There are two 
catastrophe-prone zones (see Figure 
II-5): first, a region roughly encom-
passing the margin of the Pacific 
Ocean from New Zealand clockwise 
to Chile, including Taiwan, Japan, 
and the western coasts of Central and 
South America; and second, a roughly 
east-west line from the Azores to 
Indonesia and the Philippines, includ- 
ing Turkey and Iran and the earth- 
quake zones of the Mediterranean, 
especially Sicily and Greece. 

The parts of the United States with 
a history of severe earthquake inci- 
dence are the Aleutians, south and 
southeastern Alaska, and the Pacific 
coast of continental United States. 
The two worst earthquakes of the 
twentieth century in this country 
were the "Good Friday" quake near 
Anchorage, noted above, and the San 
Francisco quake of 1906. In terms 
of energy release, the 1964 shock may 
have been two or three times as 
potent as that of 1906. 

Statistical Generalities — The prob- 
lem of earthquake prediction is closely 
related to statistical studies of earth- 
quake occurrence. Such studies en- 
able us to make the following generalizations:

1. Somewhere on the earth there 
will be a catastrophic earth- 
quake, one capable of causing 
death in inhabited areas, on the 
average of between 2 and 100 
times a year. Greater precision 
is not possible, since a strong 
earthquake in a sparsely popu- 
lated area will create no major 
hazard, while the same earth- 
quake in a densely populated 
region may or may not cause 
loss of life, depending on how 
well the buildings are constructed.

2. In any given region in the 
earthquake-prone zones, a cata- 
strophic shock will occur on 
the average of once per so 
many years, depending on the 
size of the region and how 
active it is. 

But statistical prediction of this 
sort is unsatisfactory for an inhabi- 
tant of a specific region, who is most
concerned with his own region and with
a time-scale of much less than 100 years.
Such a person probably needs several
months' advance notice of an impending
earthquake, although we are nowhere
near that goal.
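Statistical statements of the kind listed above are commonly formalized as a frequency-magnitude relation. The sketch below uses the Gutenberg-Richter law, which is not named in this report; the a and b values are illustrative assumptions chosen only to show the arithmetic:

```python
def expected_annual_count(magnitude, a=8.0, b=1.0):
    """Gutenberg-Richter law: log10 N = a - b*M, where N is the expected
    number of earthquakes per year of magnitude >= M. The a and b
    values here are illustrative, not taken from this report."""
    return 10 ** (a - b * magnitude)

def mean_recurrence_years(magnitude, a=8.0, b=1.0):
    """Average interval, in years, between events of at least the
    given magnitude under the same law."""
    return 1.0 / expected_annual_count(magnitude, a, b)

# Smaller shocks are far more frequent than great ones:
print(expected_annual_count(6.0))   # 100 per year under these a, b
print(expected_annual_count(8.0))   # 1 per year
print(mean_recurrence_years(8.5))   # about 3.2 years
```

Restricting a and b to a single region gives the "once per so many years" recurrence figure for that region, which is exactly the kind of average, rather than a dated forecast, that statistics alone can supply.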

Even if it were possible to predict 
an earthquake to the nearest minute 
or hour, major sociological problems, 
of the sort associated in the United 
States with civil defense, would need 
to be solved. What kind of warning 
system should there be? How does 
one handle the dispersal of the crowds 
involved in possible mass exodus? 
And what would be the reaction of 
the public if predictions failed to 
prove out in, say, 25 percent of the cases?

Why Earthquakes Occur 

The occurrence of earthquakes in- 
volves the physics of friction. Accord- 
ing to the modern theory of rigid-plate 
tectonics, the earth's surface is cov- 
ered with a small number of relatively 
rigid, large plates all in motion rela- 
tive to one another. At some lines of 
contact between two plates, the plates 
are receding from one another and 
surface area is being created by the 
efflux of matter from the earth's in- 
terior. Along other lines, plates are 
approaching one another, area is being 
destroyed, and surface matter is being 
returned to the interior. Along a third 
class of contacts, area is neither cre- 
ated nor destroyed, and the relative 
motions are horizontal. 




Earthquakes occur in well-defined zones where the plates adjoin. The character 
of the earthquakes varies with the nature of the plates' contact. 

However steady the slow motion 
of the plates at their centers, the 
motions are not steady at their edges. 
As the giant plates move relative to 
one another, they rub against their 
neighbors at their common edges. 
The friction at the edges seizes the 
plates and allows the accumulation of 
stress at the contact. When the stress 
at these contacts exceeds the friction, 
the contact breaks, a rupture takes 
place, and an earthquake occurs. (See 
Figure II-6) 
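The seize-accumulate-rupture cycle just described can be caricatured in a few lines of code. This is a minimal stick-slip sketch, not a seismological model; the loading rate, frictional strength, and residual stress are arbitrary illustrative numbers:

```python
def stick_slip(loading_rate, strength, residual, steps):
    """Minimal stick-slip sketch: stress at a plate contact accumulates
    at a constant rate and, whenever it reaches the frictional strength,
    the contact breaks and stress drops to a residual level (an
    'earthquake'). All parameter values are illustrative."""
    stress = 0.0
    event_times = []
    for t in range(steps):
        stress += loading_rate
        if stress >= strength:
            event_times.append(t)
            stress = residual  # stress drop = strength - residual
    return event_times

# With steady loading, ruptures recur at regular intervals:
events = stick_slip(loading_rate=1.0, strength=100.0, residual=20.0, steps=500)
print(events)  # [99, 179, 259, 339, 419, 499]
```

In this toy model the events are perfectly periodic; real plate contacts have variable strength and loading, which is one reason prediction is so much harder than the mechanism suggests.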

A map of earthquake locations, 
therefore, is actually a map of the 
plates, and the character of earth- 
quakes varies with the nature of the 
plates' contact. The character of the 
earthquakes in the Aleutians and 
along the San Andreas Fault of Cali- 
fornia is significantly different, for
example: the first is a zone of com- 
pression with surface area being con- 
sumed, while the second is a zone of 
relative horizontal motions with con- 
servation of area. 

Approaches to Earthquake Prediction

The prediction problem is, there- 
fore, the problem of finding a way to 
determine the first breakage of the 
frictional contact between two plates 
at a particular point along the plate 
boundary. This problem can be ap- 
proached by three methods: a search 
for premonitors, stress measurement, 
and historical studies. 

Search for Premonitors — When 
solids approach the breaking point, 
they enter a nonlinear regime of plas- 
tic deformation in which the physical 
properties of the materials change 
markedly. Although the stresses con- 
tinue to accumulate at a constant rate, 
the strains increase greatly prior to 
fracture. Indeed, in some cases, much 
of the deformation observed in earth- 
quakes is not associated with abrupt 
displacements in rupture but is due 
to "creep" — i.e., plastic deforma- 
tion — certainly occurring after, and 
probably occurring before, the shock. 
Pre-shock creep has been observed in 
laboratory experiments on fracture 
and has been reported by Japanese 
seismologists prior to some Japanese earthquakes.

This figure depicts an area where the Pacific plate (east of the Tonga trench) meets 
with the India plate, pushing the lithospheric mass of the Pacific plate downward 
forming the Tonga trench. Earthquakes take place all along this zone, closer to the 
surface near the trench and at progressively greater depth beneath the continental 
(India plate) mass. 

Changes in the rate of strain are 
another potential premonitor. These 
changes would be accompanied by an 
increase in the rate of occurrence of 
microearthquakes — i.e., very small 
earthquakes that are indicators of 
"creaking." The U.S. program for 
earthquake prediction has a strong 
component devoted to the problem of 
detecting changes in rates of strain 
along parts of the San Andreas fault 
system, including triangulation and 
leveling, tilt, distance measurements, 
and microearthquake observations. 
Some intermediate-sized earthquakes 
have been preceded by observed in- 
creases in rates of microearthquake 
activity and by increases in the strain 
rate, as measured by changes in the 
lengths of reference lines drawn 
across known faults and by changes 
in the tilt rate. 
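The strain-rate monitoring described above amounts to fitting a trend to repeated measurements of a reference line. A minimal sketch in Python; the baseline length, dates, and rates below are invented for illustration, not survey data.

```python
# A minimal sketch of the strain-rate estimate described above: fit a
# least-squares trend to repeated measurements of one baseline length
# drawn across a fault. The survey numbers are invented.

def strain_rate(lengths_m, times_yr):
    """Least-squares rate of engineering strain (per year)."""
    n = len(lengths_m)
    L0 = lengths_m[0]
    strains = [(L - L0) / L0 for L in lengths_m]
    t_mean = sum(times_yr) / n
    s_mean = sum(strains) / n
    num = sum((t - t_mean) * (s - s_mean)
              for t, s in zip(times_yr, strains))
    den = sum((t - t_mean) ** 2 for t in times_yr)
    return num / den

# A 10-km line lengthening about 3 mm per year:
years = [0, 1, 2, 3, 4]
lengths = [10_000.000, 10_000.003, 10_000.006, 10_000.009, 10_000.012]
print(strain_rate(lengths, years))  # about 3e-7 per year
```

A sudden departure of the fitted rate from its long-term value is the kind of premonitory change the text describes.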

Other physical properties in the 
vicinity of earthquake faults may 
change prior to rupture. These in- 
clude magnetic susceptibility, elec- 
trical resistivity, and elastic-wave 
velocities. There is one, as yet un- 

duplicated, example of a Japanese 
earthquake preceded by major changes 
in the local magnetic field. A minute 
change in the magnetic field has also 
been noted in the neighborhood of 
one part of the San Andreas Fault 
about one day before each of several 
microearthquakes occurred. Changes 
in the other properties have been ob- 
served in laboratory experiments on 
rock fracture but have not been veri- 
fied in earthquake examples. 

Stress Measurement — There is 
considerable debate about the values 
of the critical stress required to cause 
rupture. Seismological estimates place 
the stress drop at about 10 to 100 
atmospheres (bars). Laboratory ex- 
periments show the stress drop to be 
perhaps one-fourth the shear stress 
across the frictional surface, although 
there appears to be some seismologi- 
cal evidence that the fractional stress 
drop rises with increasing earthquake 
magnitude. In any event, the overbur- 
den pressure should be enough to 
seal faults shut, and no earthquakes 
should occur below about 2 kilome- 

ters. But earthquakes do occur below 
this depth. Thus, one must find some 
reason why friction at depth is re- 
duced. One way of doing so is to 
invoke the role of water as an impor- 
tant lubricant: that is, rocks lose 
some or all of their shear strength 
when interstitial water is raised in pressure. 

No major progress has yet been 
made on in situ measurement of shear 
stress and determination of pore 
water pressure and temperature (to 
determine critical shear rupture 
stress). In principle, direct stress 
measurement may be the simplest 
way to predict earthquakes, but it 
may also be the most difficult to 
effect in practice. 
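The overburden argument above can be put in numbers. A rough sketch, using generic textbook densities and a Coulomb friction coefficient (my values, not figures from the report):

```python
# A rough numerical sketch of the argument above: overburden pressure
# alone should clamp faults shut below a couple of kilometers, but
# hydrostatic pore-water pressure offsets part of the normal stress.
# Densities and the friction coefficient are generic textbook values.

G = 9.8             # m/s^2
RHO_ROCK = 2700.0   # kg/m^3, typical crustal rock
RHO_WATER = 1000.0  # kg/m^3

def overburden_bars(depth_m):
    """Lithostatic pressure at depth, in bars (1 bar = 1e5 Pa)."""
    return RHO_ROCK * G * depth_m / 1e5

def frictional_strength_bars(depth_m, mu=0.6):
    """Shear stress needed for sliding under Coulomb friction, with
    hydrostatic pore pressure subtracted from the normal stress."""
    effective_normal = (RHO_ROCK - RHO_WATER) * G * depth_m
    return mu * effective_normal / 1e5

for z in (2_000, 10_000):
    print(z, round(overburden_bars(z)), round(frictional_strength_bars(z)))
```

Even with pore pressure included, the implied strengths (hundreds of bars) dwarf the 10-to-100-bar seismological stress drops, which is exactly the puzzle the passage raises.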

Historical Method — In this case, 
we ignore the physics of the earth- 
quake mechanism in large part, and 
concentrate instead on the history of 
earthquake occurrence (seismicity) as 
a mathematical sequence. We can 
then investigate this historical se- 
quence for regularities — if any are 
present. The search may take two 
forms: (a) a search for triggering 
effects — i.e., a tendency for earth- 
quakes to occur at certain preferred 
times; and (b) a search for organiza- 
tion within a local catalog. 

Triggering is a cross-correlation 
problem in which two time-series are 
compared, one of which is the catalog 
or compilation of the earthquake his- 
tory for a particular region. No sig- 
nificant triggering effects have yet 
been found, although the earth tides 
should be the most likely candidate. 
In a number of cases, earthquake 
activity at a distance from a given 
region seems to be reduced following 
a large shock. However, this effect 
may be "psychoseismological": that 
is, seismologists are more likely to 
report aftershocks in an active area 
and to neglect reporting for other 
areas. Furthermore, the occurrence 
of a large shock in one region will 
reduce the tendency for another to 
occur in the same region, and will 



increase the likelihood of a shock 
occurring in a neighboring region 
over a time-scale of several years. 
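The triggering search described above is a cross-correlation between the earthquake catalog and a candidate forcing series such as the tide. A self-contained sketch; the 14-day cycle and the quake counts are synthetic, built only to show the mechanics.

```python
# Sketch of a triggering search: cross-correlate daily shock counts
# with a candidate forcing series (here a synthetic 14-day "tide").
import math

def xcorr(x, y, max_lag):
    """Normalized cross-correlation of two equal-length time-series,
    returned as {lag: coefficient}."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return {
        lag: sum((x[i] - mx) * (y[i + lag] - my)
                 for i in range(n) if 0 <= i + lag < n) / (sx * sy)
        for lag in range(-max_lag, max_lag + 1)
    }

# Synthetic example: daily shock counts that weakly follow a 14-day
# cycle should correlate best with the forcing at zero lag.
tide = [math.sin(2 * math.pi * d / 14) for d in range(200)]
quakes = [max(0.0, t) + 0.1 for t in tide]

corr = xcorr(tide, quakes, 7)
best = max(corr, key=corr.get)
print(best, round(corr[best], 2))
```

With a real catalog the question is whether any lag shows a coefficient significantly above the noise; as the text notes, none has yet been found for the earth tides.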

Organization is an auto-correlation 
problem — i.e., one must search for 
predictive elements in the time-series 
of shocks for a given region within 
the series itself, without benefit of 
comparison with other time-series. 
Although a given earthquake catalog 
does not appear to be wholly random, 
the "signal-to-noise" ratio is small. 
The differences from randomness are 
small, and the problem of extracting 
the organized part from the random 
part has not yet been solved. 
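One standard way to test a catalog for departure from randomness is the variance-to-mean ratio of shocks per interval: a Poisson (purely random) sequence gives a ratio near one. A sketch with two synthetic catalogs:

```python
# Sketch of an organization test: for a purely random (Poisson)
# catalog, counts of shocks per equal time interval have variance
# equal to their mean, so a variance-to-mean ratio far above 1
# signals clustering. Both catalogs below are synthetic.

def dispersion_index(counts):
    """Variance-to-mean ratio of shocks per interval (~1 if Poisson-like)."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

random_like = [1, 3, 5, 2, 4, 3, 0, 6, 3, 3]   # scattered, Poisson-like
clustered = [0, 0, 12, 0, 1, 0, 14, 0, 0, 3]   # aftershock-style bursts

print(round(dispersion_index(random_like), 2))  # near 1
print(round(dispersion_index(clustered), 2))    # far above 1
```

Real catalogs fall between these extremes, which is the small "signal-to-noise" ratio the passage describes.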

The Limits of Prediction 


In Stable Regions — Problems of 
prediction are difficult enough in 
regions of high activity such as the 
circum-Pacific belt. They are almost 
impossible in regions with little or no 
history of seismicity. For example, 
the region from the Rocky Mountains 
to the Atlantic Coast is supposed to 
be stable; yet two of the greatest 
earthquakes in U.S. history occurred 
east of the Rockies. Destructive 
earthquakes of record occurred in 
southeastern Missouri in 1811 (the 
shock was felt over an area of two 

million square miles; it relocated the 
Mississippi River) and near Charles- 
ton, South Carolina, in 1886. 

Seismic-risk studies show the New 
York area to have a hazard roughly 
100 times smaller than southern Cali- 
fornia. Does this mean that the larg- 
est shocks on the southern California 
scale would recur in the New York 
area at an interval of 10,000 years? 
Or are the largest possible shocks for 
the New York area less than the larg- 
est for southern California? The 1811 
and 1886 experiences show that stable 
regions are not immune. But we still 
have no way of determining where in 
stable United States a great earth- 
quake is likely to occur — or when. 
(See Figure II-7) 
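The 10,000-year figure in the passage follows from simple scaling. The sketch below makes the assumptions explicit: that hazard is inversely proportional to the repeat time of the largest shocks, and that great southern California shocks repeat roughly every 100 years, a round illustrative number rather than a measured one.

```python
# Simple scaling behind the 10,000-year figure quoted above.
# Both inputs are illustrative assumptions, not report data.

CALIFORNIA_INTERVAL_YR = 100   # assumed repeat time of great shocks
HAZARD_RATIO = 100             # southern California hazard vs. New York area

ny_interval = CALIFORNIA_INTERVAL_YR * HAZARD_RATIO
print(ny_interval)  # 10000 years, the interval the text asks about
```

The open question the text poses is whether this scaling holds, or whether the largest possible eastern shocks are simply smaller.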

In Active Areas — All three meth- 
ods of prediction in active areas share 
one major difficulty: even in Cali- 
fornia, where most U.S. activity in 
prediction research is concentrated, 
the rate of occurrence of truly large 
shocks is small. 

We have not had a great earth- 
quake in California since careful seis- 
mological records began to be kept. 
The three great historical shocks — 
San Francisco (1906), Lone Pine, or 
Owens Valley (1872), and Fort Tejon, 
near Los Angeles (1857) — all oc- 
curred in earlier times. The historical 
method postulates that the order of 
small- and intermediate-sized shocks 
can be used to predict when large 
shocks will occur. However, seis- 
mologists do not really know what 
they are looking for, since a large 
shock has not taken place in the 
modern era of California seismology. 

The same criticism applies to the 
other two methods. In the search for 
premonitory effects and the measure- 
ment of in situ stress, the presumption 
is that the anomalous, or critical, states 
will be obtained for the large shock 
by studying these states for the small 
or intermediate shocks. Whether this 
is correct or not will be seen after 
the next large shock. Indeed, our pre- 
dictive capabilities for the second and 
succeeding large shocks, after the 
next one has occurred, will be much 
better on all accounts, for we will 
then know what we are looking for. 







Figure II-7. This figure delineates the areas where earthquakes have occurred and have caused 
damage within the United States. The range is from areas of no damage in southern 
Texas and Florida to areas of major damage such as the western coast of California. 
Intensities are measured from 0 to 8. 


Minimizing Earthquake Damage 

About 10,000 people a year die as 
a consequence of earthquakes. Most 
of them live in underdeveloped parts 
of the world, where housing is not 
well constructed; indeed, the United 
States and Canada may be among the 
few places in the earthquake zones of 
the world where building construction 
is even slightly seismo-resistant, be- 
cause reinforcing steel is used in 
public buildings and wood framing 
in private residences in the seismic zones. 

Much more needs to be done, how- 
ever. Structural engineers can now 
determine the response of a building 
to a given excitation with reasonable 
accuracy. The basic problem remains 
that of knowing what the ground 
motion will be in a large earthquake, 
so that appropriate building standards 
can be established. Some measure- 
ments of ground motion in earth- 
quakes of intermediate size are avail- 
able, but there are no good records 
for large shocks. In California, the 
next great shock may cause property 
damage amounting to billions of dol- 
lars. Loss of life may well be in the 
thousands. Some of this hazard can 
be reduced if appropriate changes in 
the building codes for new con- 
struction are made with the aim of 
minimizing casualties from great earthquakes. 

Until now, there has been severe 
disregard of the earthquake hazard. 

Tracts of homes are built within a 
few feet of the trace of the 1906 San 
Francisco earthquake, for example. 
This section of the fault has remained 
locked since 1906, but some creep 
has recently been observed. Accelera- 
tion of the creep could imply a sig- 
nificant hazard in an important urban area. 

"Man-Made" Earthquakes 

Some natural earthquakes have 
been triggered by man. The trigger- 
ing agents have included underground 
nuclear explosions, the filling of dam 
reservoirs, and the injection of water 
into porous strata. In all cases, the 
earthquakes occurred near the trigger- 
ing agent. In all cases, the energy re- 
leased in the earthquake was already 
stored in the ground from natural processes. 

The water-injection case that oc- 
curred near Denver, Colorado, is of 
considerable interest. In that case, it 
can be surmised that the water in- 
jected into a shallow well at the Rocky 
Mountain Arsenal between 1962 and 
1965 lowered the friction on a pre- 
existing fault and allowed a series of 
earthquakes to be initiated. The oc- 
currence of shocks was correlated with 
the pumping history of the well, and 
the shocks migrated outward with 
time to increasing distances from 
the well — all this in a region with no 
previous history of earthquakes. In 

this case, the water evidently acted 
as a lubricant to reduce friction. The 
migration of the shocks was due to 
propagation of stress concentrations 
at the ends of ruptured segments of 
the fault. 
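The Denver-style correlation between pumping history and shock occurrence is, in miniature, a Pearson-correlation exercise. A sketch with invented monthly figures; neither series is the actual Rocky Mountain Arsenal record.

```python
# Sketch of the injection/seismicity correlation described above:
# compare a well's monthly injected volume with the monthly count of
# nearby shocks. Both series are invented for illustration.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

injected = [0, 4, 8, 8, 6, 2, 0, 0, 5, 9]   # hypothetical monthly volumes
shocks = [1, 6, 13, 14, 9, 4, 2, 1, 8, 15]  # hypothetical monthly counts

print(round(pearson(injected, shocks), 2))  # strongly positive
```

A coefficient near one, in a region with no prior seismicity, is what made the Denver case so persuasive.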


Can Great Earthquakes Be Prevented? 

Although these earthquakes were 
triggered by man in his usual way of 
modifying the environment without 
thought for the consequences, the 
experience in Colorado prompts an 
interesting speculation. Suppose, for 
example, one were to envision the 
following situation some years from 
now: Pumping stations are located 
astride all the major earthquake zones 
of the world. They serve to raise the 
water pressure on the fault surfaces 
several kilometers below the surface, 
thereby reducing the friction. The 
large plates are thus lubricated and, 
without the friction at their edges, 
they move at faster rates than at pres- 
ent, releasing the accumulated stress 
in a series of small, harmless earth- 
quakes and avoiding the human toll of 
destructive, catastrophic earthquakes. 

There are many years of research 
between the first bit of serendipity at 
Denver and this fantasy, however. In 
the meantime, work on the prediction 
problem must go ahead until the solu- 
tion to the prevention problem makes 
prediction gladly meaningless. 



Volcanoes and Man's Environment 

In many parts of the world vol- 
canoes are an important part of man's 
environment. They are usually con- 
sidered destroyers. But although vol- 
canoes do a great deal of damage, and 
have taken many thousands of lives 
over the past few centuries, they are 
benefactors in the long run. 

Volcanic regions, especially those 
in which the surface has been covered 
with volcanic ash, tend to be very 
fertile. The effect is most marked 
in tropical regions where leaching 
rapidly removes plant nutrients from 
the upper part of the soil; there, new 
ash falls restore the lost materials. A 
close correlation between population 
density and soil type has been shown 
in Indonesia, for example, with by 
far the densest populations in areas 
where very young or still active vol- 
canoes have added ash to the soil. 
World over, the agricultural popula- 
tion clusters in the most fertile re- 
gions; it is likely to do so increasingly 
as population grows and food supplies 
become less adequate. Yet some of 
the most fertile areas, close to active 
volcanoes, are the most subject to 
volcanic destruction. Furthermore, 
volcanoes that have been quiet for 
centuries may still be active and may 
erupt again. In order to continue to 
make use of these badly needed rich 
agricultural areas close to volcanoes, 
we must learn to forecast volcanic 
activity, and to deal with it when it comes. 

A Brief Overview 

Volcanoes are places where molten 
rock or gas, or usually both, issue at 
the surface of the earth. As the 
molten rock, known as magma, rises 
from depth, it contains dissolved 

gases; but as the magma enters zones 
of lesser pressure near the earth's sur- 
face, some of the gas comes out of 
solution and forms bubbles in the 
liquid. The bubbles tend to escape 
from the magma, but in order to do 
so they must move to the upper sur- 
face of the liquid and rupture the 
surface. When the viscosity of the 
magma is relatively low — as it is, 
for example, at Kilauea Volcano, in 
Hawaii — the bubbles escape easily; 
but when the viscosity is high, they 
escape less readily and accumulate in 
the magma instead, their size and 
pressure increasing until they are able 
to burst their way free. This produces 
an explosion. Thus, volcanic erup- 
tions may consist of a relatively gentle 
outwelling or spurting of molten rock, 
which flows away from the vents as 
lava flows, or of violent explosions 
that throw shreds of the molten rock 
or solid fragments of older rock high 
into the air, or of any mixture of 
the two. 
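The role of viscosity in the passage above can be sketched with Stokes' law: a bubble's rise speed through melt varies inversely with viscosity, so bubbles escape a fluid Hawaiian-type magma but stay trapped in a viscous one. The densities and viscosities below are order-of-magnitude textbook values, not figures from the report.

```python
# Sketch of the viscosity argument using Stokes' law for a small gas
# bubble rising through melt. All physical values are illustrative.

G = 9.8            # m/s^2
RHO_MELT = 2600.0  # kg/m^3; the gas density is negligible beside it

def stokes_rise_speed(radius_m, viscosity_pa_s):
    """Terminal rise speed of a small gas bubble (Stokes' law)."""
    return 2 * radius_m ** 2 * RHO_MELT * G / (9 * viscosity_pa_s)

r = 0.005  # a 5-mm bubble
fluid_melt = stokes_rise_speed(r, 1e2)    # Kilauea-like viscosity
viscous_melt = stokes_rise_speed(r, 1e7)  # dome-lava-like viscosity

print(fluid_melt, viscous_melt)  # millimeters per second vs. effectively stuck
```

The five-orders-of-magnitude difference in rise speed is why the same amount of gas drains quietly from one magma and accumulates explosively in the other.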

The fragments thrown out by ex- 
plosions are known as pyroclastic 
material. The large fragments are 
bombs, blocks, scoria, or cinder; the 
sand- to dust-size material is called 
volcanic ash. Some eruptions dis- 
charge mostly gas; and gas is given 
off, sometimes copiously, by many 
volcanoes between eruptions. 

The fact that a volcano has not 
erupted for centuries does not make 
it less dangerous. We have many ex- 
amples of volcanoes that have been 
dormant for hundreds of years, only 
to return to life with catastrophic 
eruptions. At the beginning of the 
Christian era, Vesuvius had been quiet 
for hundreds of years; but in a.d. 79 
it erupted, destroying all the agricul- 
tural land on its flanks and close to its 
base, and the cities of Pompeii, Her- 

culaneum, and Stabia. The greatest 
eruption of recent years, in Kam- 
chatka in 1956, took place at a long- 
inactive volcano that had been given 
so little attention that it had not even 
received a name. The name we use 
for it today, Bezymianny, means "no 
name." Many other examples could 
be given, including that of Arenal, in 
Costa Rica, in 1968. 

Within the U.S., the active vol- 
canoes of Hawaii and Alaska are well 
known. Familiar, too, is the line of 
great volcanic mountains along the 
Cascade Range, from northern Wash- 
ington into northern California. Al- 
though the latter are not usually con- 
sidered to present any volcanic risk, 
they really do. Several eruptions have 
taken place in the Cascade Range in 
the past 170 years, the latest at Lassen 
Peak, California, during the years 
1914 to 1919. Six thousand years ago 
a tremendous eruption at the site of 
the present Crater Lake, in Oregon, 
covered hundreds of thousands of 
square miles with ash and devastated 
the area immediately around the 
mountain. Other Cascade volcanoes 
may behave similarly in the future. 
Several appear to be in essentially the 
same state as Mt. Mazama, at Crater 
Lake, before its great eruption. 

Lava Flows 

Streams of liquid rock are lava 
flows. Where the magma has low 
viscosity and the supply is large, a 
lava flow may spread for tens of 
miles. Some flows in the Columbia 
River lavas of Washington and Ore- 
gon have been traced for distances of 
more than 100 miles and over areas 
of more than 10,000 square miles. 
Since 1800, lava flows on the island 



of Hawaii have covered more than 
300 square miles of land surface. 
Much of this was unused land on the 
upper slopes of the mountains, but in 
1955 a large part of the six square 
miles buried by lava was prime agri- 
cultural land. Again in 1960, several 
hundred acres of rich sugar land were 
covered. On the other hand, the lava 
built out the shoreline of the island, 
creating half a square mile of new land. 

The land buried by lava flows is not 
lost forever. The rapidity with which 
vegetation reoccupies the lava surface 
varies greatly with climate. In warm 
areas of high rainfall, plants move in 
quickly. The lava flows of 1840, on 
the eastern end of the island of 
Hawaii, are already heavily vegetated. 
In dry or cold areas the recovery is 
much less rapid. 

It has been found that simply 
crushing the surface of the lava, as by 
running bulldozers over it, greatly 
speeds reoccupation by plants, appar- 
ently because the crushed fine mate- 
rial retains moisture. Certain types of 
plants can be successfully planted on 
a surface treated in this way within a 
few years of the end of the eruption. 
In 1840, Hawaiians were found grow- 
ing sweet potatoes on the surface of 
a lava flow only about seven months 
old. Experimentation with ways of 
treating the flow surface and with 
various types of plants will probably 
make it possible to use many flow 
surfaces for food crops within two 
years of the end of the eruption. 

Methods for the Diversion of Lava 
Flows — Several methods have been 
suggested. In 1935 and 1942, lava 
flows of Mauna Loa, Hawaii, were 
bombed in an effort to slow the ad- 
vance of the flow front toward the 
city of Hilo. The results indicated 
that, under favorable circumstances, 
the method could be successful. They 
also indicated, however, that not all 
lava flows could be bombed with use- 
ful results. 

It has been suggested that lava 
flows can be diverted by means of 
high, strong walls, not in order to 
stop the flow but only to alter its 
course. Walls of this sort, although 
poorly planned and hastily built, were 
successful to a limited degree during 
the 1955 eruption. Walls built during 
the 1960 eruption were of a different 
sort, designed to confine the lava 
like dams rather than to divert it. 
Although the lava eventually over- 
topped them, they appear to have 
considerably reduced the area de- 
stroyed and were probably respon- 
sible for the survival of a large part 
of a beach community and a vitally 
important lighthouse. 

Whether such walls would be effec- 
tive against the thicker, more viscous 
lava flows of continental volcanoes is 
not known. Thick, slow-moving lava 
flows at Paricutin Volcano, in Mexico, 
did not crush the masonry walls of a 
church that was buried by the lava 
to roof level. Walls generally would 
be useless against lava of any vis- 
cosity where the flow is following a 
well-defined valley. Fortunately, the 
flows of continental volcanoes usually 
are shorter and cover less area than 
Hawaiian flows, thus reducing the 
area of risk. Much more research is 
needed on ways to control lava flows. 

Ash Falls 

It was formerly believed that ash 
from the great explosion of Krakatoa 
Volcano, between Java and Sumatra, 
drifted around the earth three times 
high in the stratosphere. Although it 
now appears that the brilliant sunsets 
once regarded as evidence of this were 
probably caused instead by an aerosol 
of sulfates resulting from interaction 
of volcanic sulfur dioxide gas and 
ozone, it has been repeatedly demon- 
strated that violent eruptions may 
throw volcanic ash high into the 
upper atmosphere, where it may drift 
for hundreds of miles. For instance, 
ash from the 1947 eruption of Hekla, 
in Iceland, fell as far away as Mos- 

cow; ash from the eruption of Qui- 
zapu, in Chile, fell at least as far away 
as Rio de Janeiro, 1,850 miles from 
the volcano; and ash from the Crater 
Lake eruption has been traced as far 
as central Alberta. 

Although it has not been absolutely 
proved, it appears probable that large 
amounts of ash in the atmosphere 
affect the earth's climate. Ash from 
the 1912 eruption of Mt. Katmai, 
Alaska, is believed to have reduced by 
about 20 percent the amount of solar 
radiation reaching the earth's surface 
at Mt. Wilson, in southern California, 
during subsequent months; ash from 
the Laki eruption in Iceland drifted 
over Europe and appears to have 
caused the abnormally cold winter of 
1783-84. Other examples have been 
cited, although some investigators 
find no evidence for it. 
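The Katmai figure can be restated as an aerosol optical depth through the Beer-Lambert law, under which the transmitted fraction of direct radiation is exp(-tau). The 20 percent loss is the figure quoted above; the conversion itself is my illustration.

```python
# Restating the quoted 20 percent radiation loss as an optical depth
# via the Beer-Lambert law: transmitted fraction = exp(-tau).
import math

def transmitted_fraction(optical_depth):
    return math.exp(-optical_depth)

def optical_depth_for_loss(fraction_lost):
    return -math.log(1.0 - fraction_lost)

tau = optical_depth_for_loss(0.20)
print(round(tau, 3))                              # 0.223
print(round(1.0 - transmitted_fraction(tau), 2))  # 0.2, the quoted loss
```

An optical depth of roughly 0.2, sustained for months over a wide area, is the scale of perturbation the climate question turns on.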

Heavy ash falls may destroy vege- 
tation, including crops, within a radius 
of several miles around the volcano. 
Ash from the Katmai eruption de- 
stroyed small vegetation at Kodiak, 
100 miles away, although bigger trees 
survived. During the 1943 eruption 
of Paricutin, even the big trees were 
killed where the ash was more than 
three feet deep. Even a few inches of 
ash will smother grass. 

Serious indirect consequences may 
arise. A great famine that resulted 
from destruction of vegetation and re- 
duction of visibility to the point 
where the fishing fleet could not work 
followed the Laki eruption and is said 
to have killed a large proportion of 
the population of Iceland. Around 
Paricutin, thousands of cattle and 
horses died, partly of starvation and 
partly from clogging of their digestive 
systems from eating ash-laden vege- 
tation. Even if it causes nothing 
worse, ash-covered vegetation may 
cause serious abrasion of the teeth of 
grazing animals. Cane borers did 
serious damage to sugar cane in the 
area west of Paricutin, because the 
ash had destroyed another insect that 
normally preyed on the borers. Any 



disturbance of the natural regime may 
have surprising results! 

Direct damage to fruit and nut trees 
can be reduced by shaking the ash 
from the branches; and collapse of 
roofs of dwellings under the weight 
of ash can be reduced by shoveling or 
sweeping off the ash. Much addi- 
tional research is needed on ways to 
reduce other damage from ash. 

Light ash falls are beneficial. The 
ash acts as a mulch, and helps to 
retain water in the soil for plant use 
and to supply needed plant foods. 
Within a few months after the erup- 
tion, areas covered with a thin layer 
of ash commonly look as though they 
had been artificially fertilized. The 
fertility probably could be further in- 
creased by proper treatment of the 
ash-covered ground. 

Fragmental Flows 

Glowing avalanches ("nuees ar- 
dentes") are masses of red-hot frag- 
ments suspended in a turbulent cloud 
of expanding gas. The main portion 
of the mass travels close to the 
ground and is closely guided by 
topography, but above it is a cloud 
of incandescent dust that is much less 
restricted in its spread. The ava- 
lanches are exceedingly mobile; they 
may travel as fast as 100 miles an 
hour. Some glowing avalanches are 
caused when large volumes of hot 
debris are thrown upward nearly ver- 
tically by explosions and then fall 
back and rush down the slopes of 
the volcano. This happened, for in- 
stance, on the island of St. Vincent, 
in the Lesser Antilles, in 1902. The 
results were disastrous; thousands of 
people died. The glowing avalanches 
of Mt. Pelee, Martinique, in the same 
year, appear to have originated from 
low-angle blasts at the edge of a 
steep-sided pile of viscous lava (a 
volcanic dome) that grew in the crater 
of the volcano. They devastated the 
mountain slopes, destroyed the city of 
St. Pierre, and took over 30,000 

human lives. Still other glowing ava- 
lanches result from collapse of the 
side of the dome after it has grown 
beyond the crater, or from collapse 
of thick lava flows on the slope of 
the volcano. Those formed by col- 
lapse of a summit dome are common 
on Merapi Volcano, in Java. 

The association of glowing ava- 
lanches with domes is so common 
that any volcano on which a dome 
is growing or has grown should be 
suspect. Particularly where a growing 
dome has expanded onto the outer 
slope of the volcano, the area down- 
slope is subject to glowing avalanches 
and probably should be evacuated 
until some months after the dome has 
stopped growing and achieved appar- 
ent stability. 

Glowing avalanches are guided by 
existing valleys, and their courses can 
be predicted to some extent. The 
upper parts of big ones may override 
topographic barriers, however. St. 
Pierre was destroyed by the upper 
part of a big avalanche that continued 
over a ridge while the main mass of 
the avalanche turned and followed a valley. 

Ash flows resemble glowing ava- 
lanches in being emulsions of hot 
fragments in gas. They are also ex- 
ceedingly mobile and travel distances 
as great as 100 miles or more so 
rapidly that, when they finally come 
to rest, the fragments are still so hot 
they weld themselves together. An 
historical example occurred in the 
Valley of Ten Thousand Smokes, 
Alaska, in 1912. Older ones cover 
many thousands of square miles in 
western continental United States. A 
fairly recent example is the Bishop 
tuff in California. 

The great speed of glowing ava- 
lanches and ash flows probably makes 
effective warning impossible once 
they have started; and their great 
mobility and depth appears to make 
control by means of walls unfeasible. 
The only hope of averting future dis- 

asters seems to be in recognizing the 
existence of conditions favorable to 
their generation, and issuing a long- 
range warning in advance of their 
actual initiation. 

Mudflows are slurries of solid frag- 
ments in water. Not all of them are 
volcanic, but volcanic ones (lahars) 
are common. They may be either hot 
or cold, and they may originate in 
various ways: by the ejection of the 
water of a crater lake, by rapid melt- 
ing of ice or snow, or, most com- 
monly, by heavy rains. The water 
mixes with loose pyroclastic or other 
debris on the sides of the volcano 
and the mud rushes downslope, with 
speeds of up to 60 miles an hour, 
sweeping up everything loose in its 
path. In the last several centuries, 
mudflows have probably done more 
damage, and taken more lives, than 
any other volcanic phenomenon. 
They were, for instance, the principal 
cause of damage during the 1963 
eruption of Irazu, in Costa Rica. 

At Kelut Volcano, in Java, explosive 
eruptions repeatedly ejected the water 
of the crater lake, causing mudflows 
on the flanks that took thousands of 
lives and destroyed plantations and 
rice paddies in the rich agricultural 
area near the base of the volcano. 
In 1919 alone, an area of 50 square 
miles of arable land was buried and 
about 5,100 persons were killed. In 
an effort to improve the situation, 
Dutch engineers drove a series of 
tunnels through the flank of the vol- 
cano and lowered the level of the 
crater lake to the point that the vol- 
ume of water remaining would be 
insufficient to cause big mudflows. 
This was effective. During the big 
eruption of 1951 only seven persons 
were killed, all on the upper slopes 
of the volcano, and no damage was 
done to the agricultural land at the 
base. The eruption destroyed the 
tunnel entrances, however, and they 
were not reconstructed in time to pre- 
vent a new disaster in 1966. A new 
tunnel, completed in 1967, has again 
drained the lake to a low level. As 



Indonesian authorities are well aware, 
the present menace on Kelut is in- 
creasing as a result of the steady 
increase of population on the fertile 
flanks of the volcano. 

In Java, attempts were made to 
warn of hot mudflows by installing 
thermal sensors in the upper parts of 
the valleys on the slopes of volcanoes, 
with an electrical alarm system in 
villages on the lower slopes. It was 
hoped that the villagers would have 
time to reach high ground before the 
mudflow arrived. In places, artificial 
hills were built to serve as refuges. 
The alarms were unreliable, however, 
and did not work at all for cool mudflows. 

Mudflows, being essentially streams 
of water, are closely controlled by 
topography, and it is possible to an- 
ticipate which areas are most threat- 
ened. Dams built to try to contain 
the mudflows from Kelut failed when 
the small reservoirs behind them be- 
came overfull. It might be possible, 
however, in some favorable localities, 
to use diversion barriers like those 
suggested for Hawaiian lava flows. 
In general, the best possibility seems 
to be to learn to recognize the situa- 
tions most likely to lead to mud- 
flows, and issue warnings when these 
situations develop. 

Volcanic Gases 

The most abundant gas liberated at 
volcanoes is water. Less abundant 
are carbon gases, sulfur gases, am- 
monia, hydrogen, hydrochloric acid, 
and hydrofluoric acid. Sulfur dioxide 
and sulfur trioxide unite with water 
to form sulfurous and sulfuric acids. 

The acid gases may be injurious to 
plants downwind from the volcano. 
Mild gas damage resembles smog 
damage in cities. More severe damage 
causes fruit to drop and leaves to turn 
black and fall; it may kill the plant. 
Serious damage of this sort has been 

experienced on coffee plantations to 
the lee of the volcanoes Masaya, in 
Nicaragua, and Irazu, in Costa Rica, 
and less severe damage has occurred 
in Hawaii. 

Suggested countermeasures have 
included trapping the gases at the 
vents in the volcanic crater and dis- 
charging them at higher levels in the 
atmosphere by means of a high flue, 
or precipitating them by means of 
chemical reactions. Valuable chemi- 
cals might be recovered in the process. 
Local application of chemicals directly 
on the plants in order to neutralize 
the acids has been tried, but this is 
expensive and not wholly effective. 
Further research on this subject is needed. 

Predicting Eruptions 

Accurate prediction of time, place, 
and nature of volcanic eruptions would 
go far toward eliminating the dis- 
asters that arise from them. How- 
ever, although some progress has 
been made in this direction, we are 
still a long way from being able to 
make accurate predictions. The indi- 
cations that have been used to predict 
time and place of eruptions are: earth- 
quakes, swelling of the volcano, 
change of temperature or volume of 
gas vents (fumaroles) or hot springs, 
changes of elevation in areas near 
the volcano, and opening or closing 
of cracks in the ground. 

Tumescence — Scientists of the 
Hawaiian Volcano Observatory have 
found that Kilauea Volcano swells up 
before eruptions and shrinks once the 
eruption has started. However, the 
tumescence may continue for months, 
or even years, before eruption finally 
takes place; furthermore, it sometimes 
stops and detumescence occurs with- 
out any eruption. (The magma may 
be drained away by intrusion into the 
subsurface structure of the volcano.) 
Tumescence, therefore, does not indi- 
cate when an eruption will occur, but 

only that the potential for eruption 
is present. 

Earthquakes — Some eruptions are 
preceded by swarms of shallow earth- 
quakes over periods of a few hours 
or days. These, combined with the 
swelling of the volcano, are the most 
useful short-range tool for prediction. 
The eruption of Vesuvius in a.d. 79 
was preceded by ten years of very 
frequent earthquakes, and with our 
present knowledge we could probably 
have made a general long-range pre- 
diction that the volcano was likely to 
erupt, though we still probably could 
not have said just when. Other erup- 
tions appear to have had no definite 
seismic prelude. 

Upheavals and Cracks — Marked 
swellings or upheavals have taken 
place before eruptions at some vol- 
canoes, though more commonly none 
has been detected. This may be partly 
because of lack of appropriate instru- 
ments in proper positions. Upheaval 
of the land causes the shoreline in 
the vicinity of Naples to shift seaward 
a few hours or days before some erup- 
tions of Vesuvius. A similar upheaval 
preceded the eruption of Monte 
Nuovo, in the Phlegrean Fields north- 
west of Naples, in 1538. In 1970 the
region was again being upheaved,
with the opening of cracks and in- 
crease of fumarolic action in the 
nearby crater of Solfatara Volcano; 
these things suggested strongly that 
an eruption would take place in the 
area soon. 

Tilt Patterns — In 1943, at Showa
Shin-Zan, in Japan, the ground surface
was pushed up to form a bulge
150 feet high and 2½ miles across
before the eruption finally started. 
The 1960 eruption of Manam Vol- 
cano, near New Guinea, was preceded 
by a large number of earthquakes 
and tumescence that resulted in tilting 
of the ground surface through an 
angle ranging from 8 to 18 seconds of 
arc. Tilting of the ground surface has 
been observed before eruptions at 
some other volcanoes, but it has not 



been found commonly. In retrospect, 
workers at Nyamuragira Volcano, in 
central Africa, believed that swarms 
of earthquakes would have made it 
possible to predict the 1958 eruption 
about 30 hours before the outbreak, 
but no such prediction was made. At
most volcanoes, including most of
those in the western United States, the
instrumental installations necessary to
recognize either earthquake preludes
or diagnostic tilt patterns are still lacking.

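As a rough sense of scale, the arc-second tilts quoted above can be turned into an elevation change across a surveying baseline. This is a back-of-the-envelope sketch; the 1-kilometer baseline is an assumption chosen for illustration, not a standard instrument spacing.

```python
# Hedged arithmetic: what does a tilt of a few seconds of arc mean on
# the ground? Convert the 8-18 arc-second range quoted above into an
# elevation change across a 1-kilometer baseline (an assumed figure).
import math

def tilt_to_elevation_change(arc_seconds, baseline_m=1000.0):
    """Elevation change (in mm) across a baseline for a given tilt."""
    radians = math.radians(arc_seconds / 3600.0)
    return math.tan(radians) * baseline_m * 1000.0  # metres -> mm

for tilt in (8, 18):
    print(f"{tilt} arcsec -> {tilt_to_elevation_change(tilt):.0f} mm per km")
```

Even the larger tilt amounts to less than a tenth of a meter per kilometer, which is why sensitive tiltmeters, rather than ordinary leveling, are needed to catch such warnings.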
History — The prediction of the 
type of eruption rests almost wholly 
on a knowledge of the past history of 
the volcano. What has happened be- 
fore is most likely to happen again. 
In most instances, however, the his- 

tory must be deduced from careful 
geological studies, and we still do not 
know the history of most of the 
earth's volcanoes. 

Clearly, it will be some time before
we can consistently predict eruptions
at most volcanoes, including those in
some of the most heavily populated
regions.
Aspects of Volcanic Science 

Giant strides have been made in 
our understanding of the dynamics 
of the earth's surface and of the be- 
havior of rock systems at pressures 
and temperatures equivalent to sub- 
crustal conditions within the earth. 
Yet our knowledge of the basic
physics and mechanisms involved in
volcanic processes is at best sketchy,
our explanations speculative and 
largely qualitative, and our predic- 
tions based on observed history rather 
than fundamental understanding of 
the real mechanics involved. 

The number of scientists conduct- 
ing serious investigations of volcanoes 
is fairly small; they are concentrated 
in the countries where most of the 
earth's 450 active volcanoes are 
found — the circum-Pacific belt (New 
Zealand, the Philippines, Japan, east- 
ern Soviet Union, Alaska, and western 
North and South America) and an 
east-west region extending from Java 
through the Mediterranean. Most of 
today's students are Japanese, Ameri- 
can, Russian, Italian, Australian, In- 
donesian, or Dutch. 

Interaction with Man and the Environment

Volcanoes are spectacular in state 
of eruption, and their effects on life 
and property have often been devas- 
tating. Damage is inflicted by several 
means: fall of fine-grained ash from 

the atmosphere; ash flows; lava flows; 
and tidal waves associated with 
violent eruptions. The most devas- 
tating and dangerous eruptions are 
those that produce ash flows or vio- 
lent blasts. These are also among the 
least understood, because the erup- 
tions are short-lived and have not 
been well studied. 

The United States has over 30 
active volcanoes, mostly in Alaska. 
(See Figure II-8.) Within the continental
United States, large dormant vol- 
canoes include Mt. Rainier, Mt. Baker, 
Mt. St. Helens, Mt. Shasta, and Mt. 
Lassen. Phreatic (steam-blast) erup- 
tions occurred in Hawaii in 1924; the 
hazard grows with population density. 

Alteration of the environment near 
an erupting volcano can be dramatic. 
Some believe the decline of Minoan 
civilization on Crete (about 1500 B.C.) 
resulted from the eruption of the vol- 
cano Thera. More recently, an ash
fall associated with the 1968 eruption
of Cerro Negro, Nicaragua, threatened
to choke off the nearby city of
León. In the United States, historic
lava flows from Kilauea and Mauna 
Loa, on Hawaii, have reached the sea, 
burying productive sugar cane fields. 
The 1960 flow accompanying an eruption
along the east rift zone of Kilauea
buried the town of Kapoho. The 1950
flow from Mauna Loa reached the sea,
endangering for a time the town of
Kailua-Kona on the west side of the
island.

Among the greatest direct threats 
to life are eruptions producing ash 
flows. A spectacular and devastating
historic eruption of this type occurred 
on Martinique in 1902. An ash erup- 
tion from Mt. Pelee flowed down the 
flank of the mountain at an estimated 
50 to 100 miles per hour and buried 
the town of St. Pierre, with a loss of 
38,000 lives. A passing ship observed 
a similar eruption at Mt. Katmai, 
Alaska, in 1912; it produced the Val- 
ley of Ten Thousand Smokes, but 
there was no known loss of life. Such 
eruptions could recur nearly any- 
where along the Aleutian chain. An 
eruption of this type in a populated 
region would be a catastrophe. 

One of the most dramatic examples 
of the effect of volcanic action on the 
environment was the eruption of
Krakatoa, in 1883, in the Sunda Strait
between Java and Sumatra.
Krakatoa is a large, cauldron-type 
volcano. It erupted with an energy 
estimated as equivalent to 100 to 150 
megatons of TNT. Some 36,000 peo- 
ple lost their lives in this eruption and 
the tidal wave that accompanied it. 
The blast was believed to have been 
the result of sea water entering the 
magma chamber after a two-week 
period of relative quiet. The resulting 
acoustic wave produced in the atmos- 
phere propagated to the antipodes 



Figure II-8 — U.S. VOLCANOES

This figure indicates the active volcanoes
of the U.S. (among them Mt. St. Helens,
Mt. Rainier, Mt. Hood, Mt. Adams,
Mt. Jefferson, Crater Lake, Mt. Shasta,
and Lassen Peak, along with the
Columbia River Plateau) as well as
Quaternary volcanoes and other areas
of volcanic activity.
and back eight times as recorded by 
microbarographs around the world. 
Fine-grained ash was dispersed 
throughout the atmosphere and pro- 
duced distinctly red sunsets as far 
away as Europe. 

Long-Term Effects — Volcanoes 
may also have an important effect on 
man's environment on geologically 
long time-scales. Fine-grained air- 
borne volcanic material may have a 
serious effect on the long-term heat 
balance of the earth, for example, by 
changing the reflection properties of 
the upper atmosphere. The ash from 
Krakatoa reduced the incident solar 
flux to the surface by about 20 per- 
cent of its normal value. Such effects 
have been postulated as a possible 
contributing cause for continental 
glaciation. In this view, glaciations 
result from a reduced heat flux to the
earth's surface as a consequence of
fine ash in the atmosphere dispersed
by a higher general level of volcanic activity.

In addition, most of the gases that 
produce the atmospheres and oceans, 
the products of outgassing of the 
earth's interior, probably reach the 
surface through volcanoes. Hence, 
the nature of volcanism is intimately 
tied to such general questions as the 
nature and evolution of planetary atmospheres.

Ability to Forecast Eruptions 

Perhaps the most serious matter is 
that of predicting catastrophic and 
unexpected eruptions. Volcanic soils 
are among the most fertile in the 
world; consequently, the slopes of 
even active volcanoes are populated 
and used for agricultural purposes. 
Furthermore, the time between violent 
volcanic events varies from several 
decades to several thousand years — a 
short time geologically but a long time 
on the scale of man's life and memory. 
There is thus a significant amount of 
economic pressure to occupy hazard- 
ous places. It is virtually certain that 
violent eruptions like those at Kraka- 
toa, Vesuvius, or Mt. Pelee will occur 
in the future. 

Our ability to explain or predict 
volcano behavior is poor and restricted 
to a few isolated, well-studied ex- 

amples. The behavior of Kilauea, on 
the island of Hawaii, is one of the 
most systematically monitored and 
historically well-studied volcanoes in 
the world, along with Asama and 
Sakurajima in Japan. (In 1914, Sakurajima
erupted and seven villages
were destroyed; property damage was 
some $19 million, as 25 square kilo- 
meters were buried under new lava; 
no lives were lost.) Kilauea has been 
monitored almost continuously since 
1912, the year that Jaggar estab-
lished the Hawaiian Volcano Obser- 
vatory (HVO), operated since 1917 by 
the U.S. Geological Survey (USGS). 
Integrated geological, geophysical,
petrological, and chemical observa-
tions have been made of Kilauea's 
eruptions and the lavas produced. 
Small-scale earthquakes accompany- 
ing upward movement of molten rock 
at depth have also been studied. 
Swelling of the volcano prior to erup- 
tion has been monitored by precise 
leveling and strain measurements. All 
this has resulted in a basis for erup- 
tion prediction based on previous experience.

The ability to predict eruptions at 
Kilauea has little use elsewhere, how- 
ever, since each volcano has its own
personality which must be studied to 
be understood. Furthermore, Hawai- 
ian-type volcanism, while it has been 
destructive of property, is the most 
passive of all types of eruption. And, 
in spite of a long history of observa- 
tion and systematic data collection at 
Hawaii, we are still basically ignorant 
of some important and interesting 
facts: details of the melting processes 
operative in the earth's mantle that 
are responsible for the generation of 
the lava; the mechanics of the propa- 
gation of fractures in the mantle crust 
and the hydrodynamics of transport 
of the lava to the surface; the rela- 
tionship of the lava to the fragments 
of subcrustal (mantle) rocks contained 
in some lavas; the nature of the man- 
tle underlying Hawaii; and, finally, 
why the Hawaiian chain (and the ac- 
tive volcanism) is marching south- 
eastward across the Pacific. 



Diversion and Modification 
Through Technology 

Hilo, the second largest city in the 
Hawaiian Islands, lies in the bottom 
of a shallow, trough-like valley on the 
east flank of Mauna Loa and Kilauea. 
By chance, the most voluminous his- 
toric flows have occurred on Mauna 
Loa's west side and have, therefore, 
flowed away from Hilo. In 1938 and 
1942, however, lava flows erupted 
from the east side of the peak and 
proceeded downslope toward Hilo. 
The U.S. Army Air Force, acting on 
recommendations of geologists from 
the HVO, bombed lava tubes in the 
upper part of the 1942 flow, success- 
fully diverting the flow of hot lava 
from the interior of the tubes onto the 
surface of the flow and possibly slow- 
ing the forward advance of the flow's 
leading edge some fifteen miles down 
the hill. The flow did not reach Hilo. 
The effectiveness of the bombing is a 
matter of conjecture, however, since 
termination of the extrusion of lava 
from Mauna Loa occurred at about 
the same time. 

While there have been no direct at- 
tempts to alter the cycle of activity of 
any volcano, a 1.2-megaton atomic 
experiment conducted by the Atomic 
Energy Commission on October 2, 
1969, on Amchitka Island in the Aleu-
tians, may represent — though not by 
design — the first such project. Kiska 
Volcano, on Kiska Island, erupted on 
September 12, about three weeks be- 
fore the experiment was to be held 
some 500 kilometers away. Had this 
eruption occurred three or four weeks 
later, a controversy about the possible 
cause-and-effect relationships between 
the blast and the eruption would un- 
doubtedly have ensued. The experi- 
ment on Amchitka was preceded by 
considerable debate among seismolo- 
gists about the possible effects on the 
seismicity of that part of this tec- 
tonically active island chain. Since 
seismicity and volcanism are inti- 
mately related on a worldwide basis, 
the relevant areas in the Aleutians 
should be carefully monitored for pos- 

sible alteration of the local volcanic activity.

Potential Sources of Basic Understanding

It is clear that many disciplines will 
contribute to progress in volcanology 
— field geology, experimental and 
observational petrology, geophysics, 
geochemistry, fluid mechanics, and 
others. Advances in our knowledge 
of volcanic mechanisms can be ex- 
pected from detailed observations, ex- 
periments, and, eventually, theoretical 
(mathematical) models. 

Field Observations — Any signifi- 
cant advance in our knowledge and 
understanding of volcanoes must be 
observationally based. As with all geological
processes, the number of parameters
involved and the complexity
of the physical processes are very
great. Eruptions amount to large-scale 
and uncontrolled natural experiments. 
Meaningful quantitative data can only 
be provided by systematic observa- 
tions by prepared observers with ade- 
quate instruments in the right place at 
the right time. 

Any really basic, thorough under- 
standing of volcano mechanisms, vol- 
cano physics, and, eventually, erup- 
tion prediction will follow detailed 
observational work — both long-term 
investigations of individual volcanoes 
and ad hoc, short-term investigations 
of volcanoes in a state of eruption. 
The fruits of such observation can be 
seen at Kilauea. Extended study by 
the USGS has produced a detailed 
geological, physical, and chemical de- 
scription of this volcano. The behavior
of Kilauea, particularly prior to
eruption, is known in detail, and
reliable eruption predic-
tion by HVO has become routine. The 
Hawaii experience underscores two 
important points: (a) The ability to 
predict the behavior of specific vol- 
canoes is based on experience and 
careful observation over a substantial 
period of time, (b) Systematic collec- 

tion of several types of data (geo- 
physical, geological, petrological,
chemical) is required. 

Laboratory Experiments — There 
are a number of laboratory experi- 
ments that may yield useful informa- 
tion: the chemical evolution of mag- 
mas and mineralogical and chemical 
evolution with time in relation to 
eruption history are important param- 
eters to establish. Petrologic and 
chemical observation of volcanic prod- 
ucts can be closely correlated to the 
eruption history of observed (recent) 
events or to carefully reconstructed 
ones, yielding data about the evolu- 
tion of magmas that culminate in 
violent terminal activity. 

Laboratory investigation of physi- 
cal properties of lava and magmatic 
systems, especially volatile-bearing 
ones, is needed. Little is known about 
the physical characteristics of lavas 
under dynamic conditions — for ex- 
ample, expansion during rise in a vol- 
cano from depth. The formation of 
volcanic ash and catastrophic erup- 
tions are associated with inhibited 
vesiculation (bubbling) of lava during 
rapid rise to the surface. These erup- 
tions are the most destructive, and 
they are not well understood. 

Experimental petrology (investiga- 
tions of rock systems in controlled 
situations in high-pressure vessels in 
the laboratory) will yield data useful 
in the quantitative reconstruction of 
specific events as captured in rock 
textures and in mineralogical asso- 
ciations in volcanic rocks. By com- 
parison of laboratory results with ob- 
served relationships in volcanic rocks, 
much can be inferred about the his- 
tory of formation of specific volcanic 
rocks that can never be directly ob- 
served because the rocks occur too 
deep within the volcano. 

Simulation Experiments — Some 
progress could come from large-scale 
simulation of certain volcanic proc- 
esses, in much the same way as our 
understanding of meteorite-impact 



physics was greatly aided by high- 
yield atomic-explosion experiments. 
It might be feasible to simulate cer- 
tain aspects of volcanic eruptions on 
a rather large scale. Such experiments 
would yield useful information on the 
interior ballistics problem (flow within 
the interior of volcanoes) and also on 
the exterior ballistics of volcanic 
ejecta, especially the ballistics of large fragments.

Small-scale model simulation of 
volcanic processes is an exceedingly 
difficult endeavor because of the 
necessity to satisfy similitude require- 
ments for both heat and mass trans- 
fer. However, much has been learned 
in a qualitative way about other geo- 
logical processes — e.g., convection of 
the earth's mantle and motion of 
ocean currents — by such experi- 
ments. The results, while semi- 
quantitative, nonetheless can be quite 
informative, especially when closely 
tied to field observation. Model meth- 
ods could profitably be applied (and 
have been to a limited degree) to a 
number of volcanic mechanisms, such 
as the emplacement of lava and ash flows.

Mathematical Description — Vol- 
canic processes are complex. The 
eruption of volcanoes involves the 
flow of a fluid system from a high- 
pressure reservoir at depth to the sur- 
face through a long rough pipe, or 
conduit. In this process of fluid flow, 
heat and momentum are exchanged 
both within the system and with the 
vent walls. As the erupting medium 
rises, the confining pressure decreases 
and a number of things result — ex- 
solution and expansion of the vola- 
tile phases (gas), and cooling due to 
expansion. Near the surface, these 
processes are rate-controlled rather 
than simple equilibrium ones. 
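The magnitude of the expansion described above can be sketched with a simple ideal-gas estimate. The pressures, temperatures, and depth used here are assumed round numbers for illustration, not measured magma properties, and real volatile phases depart from ideal-gas behavior at depth.

```python
# Illustrative sketch: treating the exsolved volatile phase as an
# ideal gas, its molar volume grows enormously as confining pressure
# drops during ascent. All numbers are assumptions, not measurements.
R = 8.314  # gas constant, J / (mol K)

def molar_volume(pressure_pa, temp_k):
    """Ideal-gas molar volume V = R T / P, in cubic metres per mole."""
    return R * temp_k / pressure_pa

# Roughly lithostatic pressure at ~5 km depth versus the surface.
v_depth = molar_volume(130e6, 1400.0)    # hot gas under ~130 MPa
v_surface = molar_volume(0.1e6, 1300.0)  # slightly cooled, ~1 atm
print(f"expansion factor: {v_surface / v_depth:.0f}")
```

A thousandfold volume increase of this kind, concentrated in the conduit, suggests why rapid decompression of gas-rich magma can be so violent.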

Mathematical descriptions of the
hydrodynamic and heat-transfer problems
are rudimentary.
abundant literature in engineering 
and physics, however, which could 
be applied readily to a number of vol- 

canic processes. For example, in the 
last decade our knowledge of the be- 
havior of complex multi-phase sys- 
tems involving gas, solid, and liquid 
phases has advanced because of their 
importance in engineering practice 
(e.g., to determine the flow in rocket 
nozzles). General hydrodynamic 
codes for the description of the de- 
formation of material under shock 
loading have been developed to de- 
scribe target effects around explosions 
and impacts, and these codes can be 
modified to describe volcanic situa- 
tions. Further, the flow in gas and oil 
wells and reservoirs is probably simi- 
lar to the flow in some volcanoes and 
their reservoirs. Also, the interaction 
of the high-velocity stream of gas and 
fragments ejected by an erupting vol- 
cano into the atmosphere is a special 
case of the interaction of a jet with 
fluid at rest. These problems appear 
to be ripe and could develop very rapidly.

The science of petrology has pro- 
gressed very rapidly in the last dec- 
ade, to the extent that many quantita- 
tive estimates can be made regarding 
the temperatures and pressures of the 
formation of certain minerals and 
mineral assemblages found in vol- 
canic rocks. These are very important 
constraints on mathematical formula- 
tion of the eruption problem. But the 
greatest single impediment to the for- 
mation of mathematical descriptions 
of volcanoes in state of eruption is the 
lack of systematic, quantitative field 
data regarding eruption parameters 
(mass flow rate, temperature, veloci- 
ties and direction of fragments ejected, 
the abundance and chemical composi- 
tion of the gas phase, and petrography and chemistry of the rocks erupted).

State of Observational 
Data and Tools 

Present data on active volcanoes 
are quite incomplete, although the 
means of acquisition of important in- 

formation are available. One reason 
data are incomplete is that, prior to 
modern jet transportation, it was 
simply impossible for qualified sci- 
entists to arrive at the scene in time 
to gather the most interesting information,
which emerges in the first few
hours or days of many volcanic
events.

For the past ten years, the Depart- 
ment of Defense and National Aero- 
nautics and Space Administration 
have applied a powerful array of re- 
mote-sensing and photographic tech- 
niques to the investigation of some 
volcanoes. The 1963 eruption of Surt- 
sey, in Iceland, was studied, for ex- 
ample. These methods hold great 
promise and if applied to the study of 
eruptions would produce a substantial 
increase in the quantity of available 
data as well as provide new kinds 
of information. Even though means 
exist for highly sophisticated and 
complete investigations, however, the 
number of eruptions that have been 
thoroughly exploited is negligibly small.

The investigation of active vol- 
canoes requires cooperation among
small teams (3 to 10) of well-qualified
professional observers, with
technical support (including commu- 
nications, logistics, and transporta- 
tion) to be provided at very short 
notice. The Smithsonian Institution 
has set up a facility to fill part of this 
need: The Center for Short-Lived 
Phenomena, in Cambridge, Massa- 
chusetts. The center serves effectively 
as an information source for scientists 
covering a number of specialties, in- 
cluding volcanology and geophysics. 
The center notifies potentially inter- 
ested scientists by telephone or wire 
of events such as volcanic eruptions; 
it then, on very short notice, organizes 
teams to visit the sites, ideally within 
24 hours. The function of the center 
is to dispense information and to 
organize logistics for adequately pre- 
pared individuals with their own 
funding. The number of such scien- 
tists is well below the number re- 



quired to monitor the world's inter- 
esting volcanic events, however, and 
the results fall far short of what is 
possible within the capabilities of 
modern transportation and modern 
data-gathering methods. 

Requirements of Science 

We expect advances in our under- 
standing of fundamental volcanic 

mechanisms to evolve from two broad 
types of investigations: 

Long-term investigations of in- 
dividual volcanoes, volcanic 
features, and volcanic fields. 
These studies will focus on 
the origin of the magmas and 
land forms and their evolution 
through time. The goal of the 
research is to develop the de- 
tails of the physical processes 

producing these features and 
the reasons for their evolution. 
Some of this kind of work is in 
progress in the United States. 

Well-coordinated and short- 
term field investigations of vol- 
canoes in eruption by teams of 
prepared and qualified scien- 
tists capable of responding on 
very short notice. This is a new 
kind of activity, not currently 
well organized. 





Long-Term Temperature Cycles and Their Significance 

The earth's climate results from 
three fundamental factors: 

1. The earth's mass, which pro- 
vides a gravitational field of 
sufficient strength to hold all 
gases released from the interior 
except hydrogen and helium; 

2. The amount of energy emitted 
by the sun, the distance of the 
earth from the sun, and the 
earth's reflectivity, which com- 
bine to provide surface temper- 
atures on earth suitable for the 
existence of a substantial hydro- 
sphere, including oceans, rivers, 
lakes, and, at certain times, con- 
spicuous ice masses; 

3. The astronomical motions of 
the earth which, together with 
the inclination of the earth's 
axis to the plane of the ecliptic,
provide diurnal and seasonal cycles.

If these three fundamental factors 
(and their components) were to re- 
main constant through time, the 
earth's climate would not change 
except for short-range phenomena re- 
lated to the hydro-atmosphere. Geo- 
logical history and direct human ob- 
servation show, however, that climate 
has changed and is changing conspic- 
uously, with variations ranging in period
from a few years to many millions of years. The
causes for these changes are numer- 
ous and varied, and often multiple. 

Affecting mankind most, either 
favorably or unfavorably, are the 
changes that occur across time inter- 
vals ranging from tens of years to 
50,000 years. The former may en- 
courage men to undertake great agri- 
cultural and industrial activity in 

regions affected by climatic ameliora- 
tion, only to have their efforts de- 
stroyed when climate deteriorates; the 
latter have brought about the great 
glacial/interglacial cycles of the past 
million years, which strongly affected 
the entire biosphere and directed the 
course of human evolution. 

Short-Range Climatic Change 

Short-range climatic variations 
(years to centuries) have been moni- 
tored by direct observation since the 
dawn of recorded history, but accurate 
climatic measurements date only from 
the middle of the seventeenth century 
when the Accademia del Cimento of 
Florence and the Royal Society of 
London began their works. For more 
than a hundred years these observa- 
tions were restricted to Europe. 

Global climatic cycles for which an 
explanation is immediately clear are 
the diurnal cycle, due to the rotation 

of the earth, and the yearly cycle, due 
to the revolution of the earth around 
the sun. A 2.2-year cycle due to alter- 
nating easterlies and westerlies in the 
equatorial stratosphere also appears 
rather well established. If the effect 
of the daily, seasonal, and yearly 
cycles is eliminated, climatic records 
— including temperatures, pressure, 
precipitation, wind strength, and 
storm occurrences — may also exhibit 
apparent periodicities. Thus, in 1964 
Schove listed a dozen possible cycles, 
which ranged in wavelength from 2 
to 200 years. An apparent 20-year 
periodicity, for instance, is shown by 
the 10-year moving average tempera- 
ture record for July in Lancashire, 
England. A similar periodicity is not 
visible, however, in the temperature 
record for January. The problem is 
that an infinite record would be neces- 
sary in order to prove that a cyclical 
phenomenon is really stationary — 
i.e., that conditions at the end of a 
cycle are identical to those at the beginning.


Figure III-1 — This graph indicates that the rise and fall of water level in Lake Victoria from 1900
to the middle of the 1920's was correlated with the 11-year sunspot cycle. After 
that period, however, the correlation broke down. 
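The difficulty of establishing such periodicities can be made concrete with a small numerical sketch: a synthetic temperature record with a built-in 20-year cycle plus noise, smoothed with a 10-year moving average and probed with autocorrelation. The series here is invented for illustration, not a real climatic record.

```python
# Sketch: a 20-year cycle should show up as strong positive
# autocorrelation at lag 20 and negative autocorrelation at lag 10
# (half a cycle). The data are synthetic: sine wave plus noise.
import math
import random

random.seed(1)
years = 120
temps = [15.0 + 0.5 * math.sin(2 * math.pi * y / 20.0)
         + random.gauss(0, 0.4) for y in range(years)]

def moving_average(series, window):
    """Simple moving average; shortens the series by window - 1."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

def autocorr(series, lag):
    """Sample autocorrelation of a series at a given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

smoothed = moving_average(temps, 10)
print(autocorr(smoothed, 20), autocorr(smoothed, 10))
```

With only a century of data, though, a six-cycle record of this kind cannot distinguish a true stationary periodicity from a transient one, which is exactly the stationarity problem noted above.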



The Role of Solar Activity — An 
example of the erroneous conclusions 
to which inadequate analysis of cycli- 
cal phenomena may lead is shown in 
Figure III-1. A relationship between
sunspots and water level in Lake Vic- 
toria may be inferred from the record 
between 1900 and 1925, but this rela- 
tionship breaks down completely after 
1925. As a matter of fact, the search 
for causal relationship between cli- 
mate and the solar sunspot cycle, 
which averages 11.2 years but ranges 
from 8 to 18 years, has proved rather 
unsuccessful. A climatic effect prob- 
ably does exist, but it is small and 
masked by other phenomena. 

Changes in solar activity may in- 
duce changes up to a factor of a 
thousand in the short wavelength re- 
gion of the solar spectrum, but this 
region represents only a hundred- 
thousandth of the total energy emit- 
ted by the sun. Thus, the change in 
solar energy output produced by vari- 
ations in solar activity is at most one 
percent. Work by the Smithsonian 
Institution has shown, however, that 
the amount of solar energy received 
at the outer boundary of the earth's 
atmosphere at the mean distance from 
the sun (the so-called "solar con- 
stant," equal to 1.3 million ergs per 
square centimeter per second) has re- 
mained constant within the limits of 
error of the observations during the 
past 50 years. 
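The one-percent bound in the paragraph above is simple arithmetic, sketched here with the figures as given in the text: a hundred-thousandth share of total output, varying by at most a factor of a thousand.

```python
# Back-of-the-envelope check: the short-wavelength region's small
# share of total solar output bounds the change in the solar constant
# at about one percent, even for a thousandfold variation.
fraction_short_wave = 1e-5   # share of total solar output (from text)
variation_factor = 1000.0    # maximum change in that region (from text)

# Worst case: the short-wave region grows from its quiet level to
# variation_factor times that level.
max_relative_change = fraction_short_wave * (variation_factor - 1)
print(f"{max_relative_change:.1%}")
```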

Examples From the Past — Secular 
climatic changes are often impressive. 
For example, Lake Constance froze 
completely in the winter of 1962-63 
for the first time since 1829-30; Lake 
Chad poured water into the Sahara in 
1959 for the first time in 80 years; 
precipitation in northeast Brazil has 
decreased 50 percent during the past 
50 years; arctic temperatures rose 
some 2° centigrade between 1885 and 
1940; and the average temperature of 
the atmosphere and ocean surfaces 
increased 0.7° centigrade during the 
same time. Some of these changes are 
regional (i.e., temperature rises in one 
region while decreasing in an adja- 

cent one), but others, including the 
latter one just mentioned, are not. 

Global climatic changes ranging 
across time intervals of decades to 
many centuries are known from his- 
torical records and geological or pale- 
ontological observations. Exception- 
ally good weather prevailed in Europe 
between A.D. 800 and 1200, when
glacier boundaries were about 200 
meters higher, when the Vikings sailed 
across the northern seas, and when 
Greenland received its name. A few 
centuries of colder climate followed: 
the Baltic froze solid in the winter of 
1322-23, an event that has not been 
repeated since; Iceland was blocked 
by ice for six months of the year dur- 
ing the first half of the seventeenth 
century (compared to 1-3 weeks 
today); and all Alpine glaciers read- 
vanced substantially in the same 
period. Since the beginning of the 
nineteenth century, climate has im- 
proved again. Whether these climatic 
changes are cyclical or not is not 
known, although "cycles" of 80 and 
200 years, presumably induced by 
solar changes, have been mentioned 
in the literature. 

Long-Range Climatic Change 

Climatic changes across longer time 
intervals (a few thousand years to 
millions of years) can only be inferred 
from the geological and paleontologi- 
cal records. The occurrence of mod- 
ern-looking blue-green algae in chert 
deposits dating from two billion years 
ago indicates that the radiation bal- 
ance of the earth has not changed 
much over this extremely long time 
interval. However, three times since 
the beginning of the Cambrian era, 
about 600 million years ago, the radia- 
tion balance of the earth has been 
sufficiently disturbed to produce con- 
spicuous glaciations. This happened 
during the Early Paleozoic (about 450 
million years ago) , Late Paleozoic 
(about 250 million years ago), and 
Late Cenozoic (the past few million 
years). At these times, ice-sheets 
some 2 kilometers thick repeatedly 

covered as much as 30 percent of the 
continental surface. 

Why Glaciation Occurs — For these 
major glaciations to develop, the radi- 
ation balance of the earth must have 
become negative with respect to its 
normal state during nonglacial times. 
That is, the amount of solar radiation 
reflected back into outer space must 
have become greater. Cooling of the 
earth by a decrease of incoming solar 
radiation does not seem likely be- 
cause, according to the 1953 calcu- 
lations of Opik, formation of the ice- 
sheets to the extent known would 
have entailed cooling the equatorial 
belt down to 8° centigrade, whereas
the paleontological record indicates 
that warm-water faunas have existed 
ever since the beginning of the Cam- 
brian. Therefore, the radiative bal- 
ance of the earth must have become 
negative through the effect of terres- 
trial phenomena alone. 

Many such phenomena could have 
done the trick. For instance, an in- 
crease in continentality would have 
increased the earth's reflectivity and 
produced cooling, since land absorbs 
less solar energy than the sea. Dis- 
placement of continental masses to- 
ward high latitudes should favor 
glaciation. Finally, an increase in 
atmospheric haze produced by vol- 
canic activity and dust storms could 
have reduced the amount of solar 
energy reaching the earth's surface 
and, at the same time, reflected into 
space a portion of the incoming solar 
radiation. Once the earth's surface 
temperature is reduced below a certain 
critical value by one of these factors or a
combination of them, ice may
begin to develop. Ice is highly reflec- 
tive, of course, so that more ice means 
more solar energy reflected, lower 
temperatures, and even more ice. In- 
deed, ice appears to be self-expanding 
and to come to a stop only when the 
ocean has cooled so much as to pro- 
vide insufficient evaporation for feed- 
ing the ice-sheets. 
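
The runaway behavior just described can be illustrated with a toy zero-dimensional energy-balance calculation. All the numbers below (the albedo ramp, its temperature thresholds, and the fixed greenhouse offset) are illustrative assumptions chosen only to make the feedback visible, not values drawn from this report:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2

def albedo(temp_k):
    """Planetary reflectivity rises as the surface cools and ice
    expands: a crude linear ramp between an ice-free state (0.30)
    and a heavily glaciated one (0.60)."""
    if temp_k >= 290.0:
        return 0.30
    if temp_k <= 260.0:
        return 0.60
    return 0.30 + (290.0 - temp_k) / 30.0 * 0.30

def equilibrium_temp(t_start, greenhouse=37.0, steps=200):
    """Relax toward radiative balance (absorbed = emitted), with an
    assumed fixed greenhouse offset added to the emission temperature."""
    t = t_start
    for _ in range(steps):
        absorbed = S0 / 4.0 * (1.0 - albedo(t))
        t_balance = (absorbed / SIGMA) ** 0.25 + greenhouse
        t = t + 0.5 * (t_balance - t)   # damped update toward balance
    return t

# A warm start and a cold start settle into different stable states:
# once ice is extensive, the extra reflection locks it in.
print(round(equilibrium_temp(300.0), 1), round(equilibrium_temp(250.0), 1))
```

With these assumed numbers the two starting points end tens of degrees apart, which is the self-expanding, self-locking behavior described in the text.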

The pattern of glaciation is best 
known for the past few hundred thou- 
sand years, a time during which ice- 
sheets repeatedly formed and ad- 
vanced to cover North America as far 
south as a front running from Seattle 
to New York, and Europe as far south 
as a front running from London to 
east of Moscow. A substantial ice- 
sheet also repeatedly covered Pata- 
gonia, and mountain glaciers formed 
wherever high mountains and moun- 
tain ranges were available. The re- 
peated advances of the ice-sheets were 
separated by interglacial times during 
which all continental ice-sheets dis- 
appeared except those of Greenland 
and Antarctica. We are presently in 
the middle of one of these interglacial 
ages. 

The advances and retreats of con- 
tinental ice have left the glaciated 
lands littered with glacial debris, 
ranging in size from fine sand and 
clays to boulders as large as a house. 
From the study of these sediments, 
geologists have concluded that the 
ice-sheets formed and swept across 
the northern continents at least five 
times during the recent past. The 
sediments are so mangled, however, 
that it is difficult to reconstruct a 
complete history of the glacial events. 

"Globigerina Ooze" — For a more 
complete record one must turn to 
the deep sea. About 40 percent of 
the deep ocean floor is covered with 
a sediment known as "Globigerina 
ooze." This sediment is rich with the 
empty shells of planktonic Foramini- 
fera, microscopic protozoans freely 
floating near the surface when alive. 
Of the fifteen common species of 
planktonic Foraminifera, several are 
restricted to equatorial and tropical 
waters, several to temperate waters, 
and one to polar waters. When cli- 
mate changes, the foraminiferal spe- 
cies move north or south, and these 
movements are recorded in the sedi- 
ment on the ocean floor by alter- 
nating layers of empty shells belong- 
ing to warm, temperate, and cold 
species. Sediment core samples up to 
20 meters long have been recovered. 
Paleontological analysis of the changing 
foraminiferal faunas through these 
cores reveals the climatic changes that 
occurred while the sediment was being 
deposited. In addition, it is known 
that foraminiferal shells formed dur- 
ing cold intervals contain a greater 
amount of the rare oxygen isotope 18O 
than shells formed during warm in- 
tervals. Thus, oxygen isotopic analy- 
sis of the foraminiferal shells yields 
accurate information on the actual 
temperature of the ocean surface and 
its variations through time. The re- 
sults given by micropaleontological 
and isotopic analysis are essentially 
in agreement. 

Because Globigerina ooze accumu- 
lates at the rate of a few centimeters 
per thousand years, a deep-sea core 
20 meters long reaches sediments half 
a million years old. Deep-sea cores 
can be dated by various radioactive 
methods, including radiocarbon and 
the ratio of thorium-230 to protac- 
tinium-231. Thus, climatic changes 
can not only be followed in continuity 
by studying deep-sea sediments but 
can also be dated. 
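
The arithmetic behind that figure is simple; the 4-centimeters-per-thousand-years rate used below is one illustrative value within the "few centimeters" range quoted above:

```python
def core_age_years(depth_cm, rate_cm_per_kyr=4.0):
    """Age of the sediment at a given depth down a core, assuming a
    constant accumulation rate (real cores are cross-checked by
    radiocarbon and thorium-230/protactinium-231 dating)."""
    return depth_cm / rate_cm_per_kyr * 1000.0

# A 20-meter (2,000-cm) core at 4 cm per thousand years spans
# half a million years:
print(core_age_years(2000.0))  # 500000.0
```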

The study of many deep-sea cores 
from the Atlantic and the Caribbean 
has made it possible to reconstruct a 
continuous curve showing the tem- 
perature changes of the surface ocean 
water at low latitudes over the past 
425,000 years. This curve, shown in 
Figure III-2, exhibits a number of 
alternating high- and low-temperature 
intervals, with a gross periodicity of 
about 40,000 years. A comparison of 
this curve with the chronology of con- 
tinental glaciation, based largely on 
radiocarbon dating, shows that the 
most recent low-temperature interval 
(70,000 to 15,000 years ago) repre- 
sents the last major glaciation. One 
may safely infer that earlier low- 
temperature intervals of the oceanic 
curve represent earlier continental 
glaciations. 

Sediments older than the oldest 
ones represented in Figure III-2 have 
been recovered recently from the 
ocean floor by the drilling vessel 
Glomar Challenger; analysis of these 
sediments, yet to be performed, 
should show how far back in the 
past temperature variations as large 
as those of Figure III-2 continue. 
For the time being, sections of older 
marine sediments now occurring on 
land have been used. One of these 
sections, representing sediment de- 
posited about 1.8 million years ago in 
southern Italy, shows that climatic 
variations as large as the most recent 
ones were already occurring at that 
time. 

The General Temperature Curve — 
The apparent periodicity of 40,000 
years is intriguing. No terrestrial 
phenomenon of the type described 
before is believed to take place with 


Changes in the ocean surface temperature over the past 425,000 years have been 
reconstructed from deep-sea cores. Present time is at the left of the graph. The 
numbers above the time axis are for reference, indicating the peaks of the long-term 
warm intervals. 



such periodicity. These terrestrial 
phenomena last either much shorter 
times, like volcanic eruptions or dust 
storms, or much longer times, like 
changes in the relative position or 
extent of continents and oceans. 
There are, however, certain astronom- 
ical motions of the earth that occur in 
cycles of tens of thousands of years. 
Because of the attraction of the moon, 
the sun, and the planets on the bulge 
of the earth, the earth's axis precesses 
with a periodicity of 26,000 years; the 
obliquity of the ecliptic with respect 
to the terrestrial equatorial plane 
changes with a periodicity of 40,000 
years; and the eccentricity of the 
earth's orbit changes with a periodic- 
ity of 92,000 years. The result of 
these motions is that, in the high 
latitudes, periods of warm summers 
and cold winters alternate every 
40,000 years with periods during 
which summers are colder and winters 
milder. 
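
The way the three periods combine can be sketched numerically. The amplitudes below are arbitrary weights, not real insolation sensitivities; the point is only that the superposed signal is quasi-periodic on the tens-of-thousands-of-years scale:

```python
import math

# The three astronomical cycles named in the text, with assumed
# illustrative amplitudes (period in years, weight).
CYCLES = [
    (26_000, 0.5),   # precession of the earth's axis
    (40_000, 1.0),   # obliquity of the ecliptic
    (92_000, 0.7),   # eccentricity of the earth's orbit
]

def forcing(years_ago):
    """Dimensionless combined signal of the three cycles."""
    return sum(amp * math.cos(2.0 * math.pi * years_ago / period)
               for period, amp in CYCLES)

# Sample every thousand years across the 425,000-year span of the
# deep-sea record discussed in the text.
signal = [forcing(t) for t in range(0, 425_000, 1_000)]
print(min(signal), max(signal))
```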

Long before research on deep-sea 
sediments indicated the probable oc- 
currence of climatic cycles 40,000 
years long, the Serbian physicist Mi- 
lankovitch and the German meteor- 
ologist Köppen had suggested that 
long periods of cool summers could 
trigger a glaciation even if accom- 
panied by warmer winters. They 
reasoned that winter is cold enough 
anyway at high latitudes for snow to 
accumulate on the ground, while cool 
summers would allow permanent 
snow to expand year after year. The 
earth's reflectivity would thus in- 
crease, temperature would decrease, 
more snow would accumulate, and a 
major glaciation would rapidly de- 
velop. 

This theory was enlarged by Geiss 
and Emiliani to include plastic ice- 
flow, heat absorption by ice-melting, 
and downbuckling of the earth's 
crust under the weight of the ice- 
sheets in order to explain the disap- 
pearance of the major ice-sheets 
(Greenland and Antarctica excluded) 
at the end of each glaciation. As it 
now stands, the theory seems to ac- 
count for glacial and interglacial 
events and their time-scale during the 
recent past. It also accounts for the 
timing of high interglacial sea levels 
related to ice melting. That is, the 
times when summers were warmest, 
as calculated from astronomical con- 
stants, were also the times when sea 
level stood high as determined by 
radioactive dating of fossil shells and 
corals. 

The generalized temperature curve 
of Figure III-2 shows, superimposed 
on the major oscillations, a number 
of smaller oscillations. Mathematical 
analysis of the original isotopic curves 
of the deep-sea cores has shown 
that these smaller oscillations are re- 
lated to the precession of the equi- 
noxes. Precession of the equinoxes is 
apparently also responsible for the 
occurrence of more than one high sea 
level during interglacial intervals, oc- 
curring whenever northern summers 
coincide with perihelion and resulting 
from partial or even total melting of 
Greenland ice. 

Figure III-3 shows the original oxy- 
gen isotopic curves for two deep-sea 
cores from the Caribbean. The hori- 
zontal scale shows the depth below 
the top of the core, the tops of the 
cores being on the left side (0 cm). 

The tops represent modern sediments, 
and the time-scale for the various 
cores can be evaluated by comparing 
each curve with the generalized tem- 
perature curve of Figure III-2. The 
vertical axis represents the 18O/16O 
concentrations, which are inversely 
proportional to temperature. The 
more negative values, therefore, rep- 
resent higher temperatures. 
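
The conversion from shell isotope ratios to water temperature is, to a first approximation, linear. The sketch below uses the slope and intercept of one widely quoted empirical calibration, dropping its small quadratic term; it should be read as an assumption-laden approximation, not the relation actually used to construct Figure III-3:

```python
def paleotemperature(delta_shell, delta_water=0.0):
    """Approximate formation temperature (degrees C) from the
    per-mil 18O departure of a shell relative to the water it grew
    in: roughly 4.3 degrees of cooling per extra per-mil of 18O
    (an assumed linearized calibration)."""
    return 16.5 - 4.3 * (delta_shell - delta_water)

# A shell 2 per-mil heavier in 18O than a modern one implies water
# several degrees colder:
print(paleotemperature(0.0), paleotemperature(2.0))
```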

As shown in Figure III-3, isotopic 
values as negative as the ones occur- 
ring at the tops of the cores, repre- 
senting the present interglacial con- 
ditions, occur only occasionally below. 
Temperature was considerably lower 
than today during much of the previ- 
ous interglacial intervals, with periods 
of temperatures as high as today oc- 
curring only for a short time (a few 
thousand years at most) at the peaks 
of the previous interglacial ages. The 
present period of high temperature 
began about 8,000 years ago and 
reached a peak 2,000 years later. It 
was followed by a 2°-centigrade tem- 
perature decrease about 4,000 years 
ago, in turn followed by a 1°-centi- 
grade increase. 

The Present Situation 

All these changes, short-term as 
well as long-term, regional as well as 




Sediments formed from shells of microscopic protozoans are known to have a high 
concentration of 18O when formed during cold periods. Therefore, high values of 
the ratio 18O/16O indicate cool temperatures; low values indicate warm temperatures. 
The point at which the core is sampled can be dated by other means. Several 
centimeters of core represent a thousand years. 



global, must be carefully monitored 
and studied. Because the environ- 
mental balance is delicate, and be- 
cause it can be affected not only by 
natural changes but also by man- 
made ones, a thorough understand- 
ing of climatic history and dynamics 
is important indeed. Furthermore, 
many climatic events appear to result 
from triggering actions involving only 
a small amount of energy. An under- 
standing of these actions is important 
not only to prevent catastrophic cli- 
matic changes (as, for instance, the 
development of new glaciation or the 
melting in part or in whole of the ice- 
caps of Greenland and Antarctica) 
but also to develop methods for cli- 
matic control. 

Judging from the record of the past 
interglacial ages, the present time of 
high temperatures should be drawing 
to an end, to be followed by a long 
period of considerably colder temper- 
atures leading into the next glacial 
age some 20,000 years from now. 
However, it is possible, or even likely, 
that human interference has already 
altered the environment so much that 
the climatic pattern of the near future 
will follow a different path. For in- 
stance, widespread deforestation in 
recent centuries, especially in Europe 

and North America, together with in- 
creased atmospheric opacity due to 
man-made dust storms and industrial 
wastes, should have increased the 
earth's reflectivity. At the same time, 
increasing concentration of industrial 
carbon dioxide in the atmosphere 
should lead to a temperature increase 
by absorption of infrared radiation 
from the earth's surface. When these 
human factors are added to such 
other natural factors as volcanic erup- 
tions, changes in solar activity, and 
resonances within the hydro-atmos- 
phere, their effect can only be esti- 
mated in terms of direction, not of 
magnitude. 

Long-Term Temperature Change — 
Because climatic changes across in- 
tervals of years to centuries are so 
much affected by the time-character- 
istics of our turbulent hydro-atmos- 
phere, no immediate breakthroughs 
are to be expected toward a global 
view of climatic dynamics across 
these intervals. Much progress has 
been made, however, in the study of 
climatic changes across longer time 
intervals, in which the intractable 
turbulent effects cancel out. Studies 
already under way concern: the am- 
plitude of the glacial/interglacial tem- 
perature change at different latitudes 
and in oceans other than the Atlantic, 
using oxygen-isotopic analysis of suit- 
able deep-sea cores; short-range 
(years to centuries) climatic changes 
through oxygen-isotopic analysis of 
deep-sea cores from anaerobic basins; 
and climatic change in the absence of 
ice on earth using deep-sea cores of 
Middle and Early Cenozoic and of 
Late Mesozoic age obtained by the 
D. V. Glomar Challenger. 

Prospects for Controlling Change 
— Judging from past results, the cur- 
rent and planned research should con- 
tribute importantly to our under- 
standing of many climatic problems 
related to the evolution of man's en- 
vironment and to the possibilities for 
altering or controlling it. Research 
conducted so far, for instance, has 
made much clearer the significance of 
the earth's reflectivity as a major cli- 
matic factor. If reflectivity is indeed 
so important, then control of today's 
earth reflectivity by plastic films or 
other means may be a way to control 
the climatic deterioration already 
under way. Again, because glaciation 
is essentially a runaway phenomenon, 
early control should be much easier 
than later attempts at modifying an 
already established adverse situation. 

Fluctuations in Climate Over Periods of Less Than 200 Years 

Factors thought to be responsible 
for climatic fluctuations of less than 
200 years' duration (and the sciences 
involved) are: 

1. Short-term fluctuations in solar 
radiation (solar and atmospheric 
physics) — There is no evidence 
of changes in the solar constant 
greater than 0.2 percent, al- 
though variations do occur in 
the particle and X-ray flux and 
also, it is thought, in the ultra- 
violet bands shorter than 0.2 
micron. 

Correlations have been estab- 
lished between ultraviolet flux 
and stratospheric ozone concen- 
trations, but the exact nature of 
the links between ozone and the 
intensity of the stratospheric 
circulation, and the conse- 
quences for the troposphere re- 
gime, are uncertain. 

2. Changes in atmospheric constit- 
uents (meteorology, atmos- 
pheric chemistry) — Carbon di- 
oxide (CO2) is a major absorber 
of infrared radiation from the 

earth, whereas aerosols (espe- 
cially dust) affect solar radiation 
by scattering and absorption. 
The ratio of absorption to scat- 
tering is about 5/3 for urban 
sites and 1/5 for prairie and 
desert areas. 

The nearly global rise of tem- 
perature in the first forty years 
of this century (see Figure III-4) 
has been attributed to increas- 
ing CO2 content, although this 
is by no means an accepted 
theory. The magnitude of the 




The graph shows the variation of mean annual temperature near the surface of the 
northern hemisphere during the past century. Variations in the world mean annual 
temperature are similar. The curve is based on data from several hundred stations, 
weighted for the area represented by each station. The data are expressed in 
terms of ten-year overlapping means of the departures from the 1885-90 mean. 

effect for a given increase in 
CO2 concentration is still in dis- 
pute. (A 1°-centigrade increase 
might require an increase in 
concentration of from as little 
as 25 percent to as much as 80 
percent; at present rates of CO2 
increase, such a temperature in- 
crease would take between 50 
and 300 years.) 
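
The two ranges quoted in the parenthesis can be connected by simple growth arithmetic. The 0.3-percent-per-year growth rate assumed below is illustrative only; with it, the computed span falls inside the 50-to-300-year range given above:

```python
import math

def years_to_reach(increase_fraction, annual_rate=0.003):
    """Years for atmospheric CO2 to rise by a given fraction at a
    constant exponential growth rate (0.003 = 0.3 percent per year,
    an assumed illustrative figure)."""
    return math.log(1.0 + increase_fraction) / math.log(1.0 + annual_rate)

low = years_to_reach(0.25)   # if a 25-percent rise suffices for 1 deg C
high = years_to_reach(0.80)  # if an 80-percent rise is needed
print(round(low), round(high))
```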

Little is known about the ef- 
fects of volcanic and man-made 
dust. The net effect of major 
eruptions, such as that of Mt. 
Agung in 1963, on total solar 
radiation received amounted to 
a decrease of only 6 percent. 
Turbidity measurements indi- 
cate a widespread increase in 
atmospheric dust content in this 
century, but the role of man in 
this, and its meteorological con- 
sequences, are largely unknown. 

3. Air-sea interaction (meteorol- 
ogy, oceanography) — Changes 
of ocean surface temperatures 
appear to lag two to three years 
behind long-term atmospheric 
trends, but feedback effects — 
by which oceanic processes re- 
inforce an initial atmospheric 
trend — are of great impor- 
tance. Short-lived anomalies 
such as the dry summers in 
northeastern United States dur- 
ing the early 1960's can be 
attributed to persistent sea- 
temperature anomalies. Little 
is known about the factors de- 
termining the duration of a 
trend or its eventual reversal. 

4. Inherent variability in the at- 
mospheric circulation (meteor- 
ology, fluid dynamics) — The 
year-to-year patterns of devel- 
opment of the seasonal weather 
regimes seem to be essentially 
random. Thus, individual ex- 
treme seasons may occur in 
spite of a general trend in the 
opposite direction. Instances 
are the severe English winters 
of 1739-40 and 1878-79, both 
of which followed a series of 
mild winter seasons. An exten- 
sive snow cover or sea-tem- 
perature anomaly may help to 

trigger circulation changes if 
other conditions are suitable. 

On the 50- to 200-year time-scale, 
changes in the disposition of the 
wind-flow patterns between 10,000 
and 30,000 feet are important. There 
is evidence of an approximately 200- 
year fluctuation in the northern hem- 
isphere westerlies, with peaks in the 
early 1300's, 1500's, 1700's, and 
1900's and with shorter, less regular 
fluctuations superimposed. Weaken- 
ing of the westerlies in recent decades 
has been accompanied by an equator- 
ward shift of the wind belts. 

Finally, and most fundamentally, 
the extent to which global climate 
is precisely determined by the gov- 
erning physical laws is unknown. 
Theoretical formulations indicate the 
possibility that all changes in the 
atmospheric circulation need not be 
attributable to specific causes. That 
is, the atmosphere may not be a 
completely deterministic system. 

Although much recent attention has 
been given to changes in atmospheric 
constituents, it is not yet possible to 
be at all positive as to which, if any, 
of these four groups of factors are 
the major determinants of relatively 
short-term climatic fluctuations — 
i.e., those of less than 200 years' dura- 
tion. Even the secular changes in the 
strength of the wind belts and their 
latitudinal location, noted in (4) 
above, may be determined by forcing 
of extraterrestrial or terrestrial origin. 

Characteristics of the Fluctuations 

The amplitude of 100- to 200-year 
fluctuations of temperature is esti- 
mated to be of the order of 1° centi- 
grade; decadal averages of winter 
temperature have a range of 2° centi- 
grade. It is generally accepted that 
the longer the duration of a climatic 
fluctuation, the larger is the area 
affected in any given sense and the 
greater is the response of vegetation, 
glaciers, and other "indicators." Thus, 
over the time period from the mid- 



nineteenth century to the present, the 
amplitude and incidence of tem- 
perature fluctuations in Europe and 
eastern North America are broadly 
similar. But certain of the minor 
fluctuations lasting perhaps 25 to 30 
years are missing, or of opposite 
direction, in some localities in these 
areas. Minor fluctuations within 
North America itself also show spatial 
and temporal irregularities. 

There is no clear evidence that any 
of the fluctuations are strictly peri- 
odic — i.e., that they are rhythms. 
Around the North Atlantic, the ampli- 
tude of the fluctuations over the past 
300 years or so appears to have in- 
creased while their duration has de- 
creased by comparison to the previous 
centuries. Periods of about 23, 45-60, 
100, and 170 years have been sug- 
gested, but not statistically estab- 
lished, from observational data and 
indirect historical records in Europe. 
Recent analyses of cores from the 
Greenland ice-cap also indicate fluctu- 
ations in 18O (oxygen-isotope) con- 
tents with a period of about 120 years. 
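
Screening a record for such rhythms amounts to computing a periodogram. The sketch below does this on synthetic data (a 120-year cosine plus noise; the record length, noise level, and random seed are arbitrary choices):

```python
import math
import random

# Synthetic 600-year record: a 120-year cosine buried in noise.
random.seed(1)
N = 600
series = [math.cos(2.0 * math.pi * t / 120.0) + random.gauss(0.0, 0.5)
          for t in range(N)]

def power(series, period):
    """Periodogram power of the series at one trial period (years)."""
    n = len(series)
    c = sum(x * math.cos(2.0 * math.pi * t / period)
            for t, x in enumerate(series))
    s = sum(x * math.sin(2.0 * math.pi * t / period)
            for t, x in enumerate(series))
    return (c * c + s * s) / n

# Trial periods drawn from those suggested in the text, plus 120.
candidates = [23, 45, 60, 100, 120, 170]
best = max(candidates, key=lambda p: power(series, p))
print(best)  # the planted 120-year cycle dominates
```

Establishing that a peak like this is statistically significant, rather than a chance fluctuation, is exactly the step the text notes has not yet been accomplished for the suggested periods.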

The spatial pattern of climatic 
change during this century is com- 
plex. Annual and winter tempera- 
tures increased over much of the 
globe from approximately 1890 to 
1940, especially in high latitudes of 
the northern hemisphere; net cooling 
has occurred over about 80 percent 
of the globe since the latter date. 
There is virtually no correlation be- 
tween the over-all global changes and 
trends in particular areas, which 
strongly suggests that the controls 
of regional fluctuations are distinct 
from those for global changes. 

In low latitudes, the most impor- 
tant fluctuations involve precipitation, 
with a decrease of the order of 30 
percent in many parts of the tropics 
around 1900. During most of the first 
half of this century, the equatorial 
rain belt tended to be narrower, and 
the tropical arid zone wider, than 
either during the preceding half- 
century or since the 1940/50's. This 

change did not affect monsoon Asia, 
but the same pattern of change oc- 
curred on the east coasts of Australia 
and North America up to about 40° 
latitude. The other major area affected 
by precipitation change is central 
Asia, where the 1950/60's have been 
much drier. 

Interactions with Society 

Fluctuations in climate — either nat- 
ural or man-induced — can have im- 
portant economic and social implica- 
tions for man. For example, studies 
in England and the eastern United 
States since about 1940 indicate a 
return to conditions of the early nine- 
teenth century. What would be the 
implications of such a sustained de- 
terioration of climate in the middle 
and high latitudes of the northern 
hemisphere? What would be the 
effects of man's intentional or acci- 
dental modification of large-scale and 
local climate? 

Effects of Continuing Deterioration 
— Even beyond such frost-suscepti- 
ble, high-value crops as the Florida 
citrus fruits, farming activities can be 
markedly affected by changes in tem- 
perature and moisture balance, espe- 
cially those occurring in spring and 
fall. The growing season in England, 
for example, has shortened by an 
average of two weeks since 1950, as 
compared with the years 1920 to 
1940. Cereal cultivation was revived 
in Iceland only in the 1920's, after 
a gap of four centuries or more. Many 
more summers like that of 1969 — 
when sea-ice persisted along the 
northern coast for most of the sum- 
mer and grain harvests were ruined — 
could seriously threaten that coun- 
try's marginal economy. The effect 
on the fishing industry of Iceland 
(and other European countries) could 
also be serious. 

The increased frequency of severe 
winters in northwest Europe since 
1939-40, as compared with the pre- 
ceding twenty years, has been re- 
flected in greater disruption of trans- 
port and increased requirements for 
domestic heating, winter fodder for 
cattle, and the like. Through a series 
of chain reactions, for example, the 
winter of 1969-70 had a serious effect 
on the whole East German economy. 

The relationships between climatic 
changes and farming are generally 
nonlinear. In view of the second- 
or third-order interactions among 
weather, pests and diseases, soil, and 
crops, the implications of recent 
changes may be more significant agri- 
culturally than the basic climatic fluc- 
tuations might suggest. This is as true 
in temperate middle latitudes as it is 
in semi-arid or other marginal cli- 
mates. For example, lower air and sea 
temperatures in spring in northeastern 
North America are believed to have 
affected fish (especially salmon) by 
accentuating the sublethal effects of 
pollutants. 

Effect of Man's Activities — Non- 
meteorologists tend to base estimates 
on assumptions of a constant mean 
and variance of the climatic elements. 
But there is a serious need to scruti- 
nize long-term weather/climate modi- 
fication schemes with respect to their 
possible interaction with climatic 
trends. Cloud-seeding programs de- 
signed to augment snowfall in moun- 
tain areas, for example, may increase 
avalanche hazard and spring-summer 
runoff. If the planned increase coin- 
cides with an unrecognized trend to 
greater precipitation (or falling tem- 
peratures), the effects may exceed 
those intended. 

Unintentional effects of man, 
through increased atmospheric pollu- 
tion (dust, carbon dioxide and other 
gases, supersonic aircraft trails in the 
stratosphere), are of international 
concern, particularly with respect to 
their health implications. The possi- 
ble broader effects on global climate 
and, directly or indirectly, on man's 
economic activities may be even more 
critical. Although the role of CO2 is 
reasonably well understood, the effect 
of dust on solar and terrestrial radia- 
tion is virtually unknown. 

In an historical context, it has been 
suggested that the Rajasthan Desert 
of northwest India may have origi- 
nated largely through overgrazing, 
with the resultant increase in atmos- 
pheric dust content leading to condi- 
tions that further decreased rainfall. 
Correct identification of natural and 
man-made tendencies is vital in such 
instances if attempts are to be made 
to reverse the processes. 

Evaluation of Current Status 

Data — The data base available for 
the study of climatic fluctuations last- 
ing less than 200 years is limited in 
a number of respects: 

Spatial coverage of climatic data 
covering approximately the last cen- 
tury and a half is restricted. Direct 
observations are particularly limited 
for the southern hemisphere gener- 
ally, the oceans, the high arctic, 
mountain areas in general, and parts 
of the tropics. Fortunately, more ex- 
tensive records are available for the 
European-North Atlantic sector, the 
area where climatic fluctuations have 
been pronounced. 

Climatic data available for a sub- 
stantial length of time is restricted to 
only a few categories, however — 
mainly temperature, precipitation, and 
pressure. Indices of volcanic activity 
and dust since the late seventeenth 
century are available. But records of 
solar radiation, atmospheric CO2, and 
dust content, for example, exist 
only for shorter periods and provide 
a more restricted spatial coverage. 

Reconstruction of changes over the 
past two centuries is now possible, 
using (a) snow/ice cores from Green- 
land and Antarctica, which provide 
records of 18O changes with good 
time-resolution over thousands of 
years, and (b) tree-ring indices in 
selected areas — the arid margins 
for moisture changes and the arctic 
(or alpine) margins for temperature 
changes. Other techniques for recon- 
structing past climates do not allow 
the necessary degree of time-resolu- 
tion, mainly because of inherent limi- 
tations in available dating methods. 

Data on the extraterrestrial and ter- 
restrial variables that may cause fluc- 
tuations are even more limited. Car- 
bon dioxide, for example, has been 
measured in a few places over the 
past 100 years but regular monitoring 
is very recent. The monitoring of tur- 
bidity has only just begun in a limited 
way, and extensive and reliable meas- 
urements are similarly available for 
only about a decade. Fluctuations in 
solar radiation will only be deter- 
mined from satellite data, although 
there are several centuries of sunspot 
records. 

Changes in other terrestrial vari- 
ables such as sea-surface tempera- 
tures, extent of snow cover, pack-ice 
and frozen ground, cloudiness, and 
total atmospheric vapor content can- 
not be assessed with sufficient accu- 
racy from available (or foreseeable) 
ground networks. Satellite monitor- 
ing will again be indispensable. 

The fact that "artificial" climatic 
changes due to man's activities may 
obscure, or accentuate, natural trends 
further complicates efforts to study 
climatic changes over the past 200 
years. 

Theoretical Formulations — The 
development of operative numerical 
models of the atmosphere and oceans 
which account for the major observed 
features of global climate represents 
a significant recent advance. Theoret- 
ical formulations are generally avail- 
able as far as atmospheric-circulation 
models are concerned, although theo- 
ries of "almost intransitive" systems 
need further development. Tidal phe- 
nomena in the atmosphere have been 
a subject of much recent study, but 
their possible implications for climatic 
fluctuations have not yet been estab- 
lished. Some phenomena — e.g., the 

scattering/absorption properties of 
aerosols; interactions between strato- 
spheric ozone and the general circula- 
tion — still present important theo- 
retical problems. 

Interactions — The possible impact 
of climatic fluctuations on man's 
activities — agriculture, fisheries, do- 
mestic heating, transportation, con- 
struction industries, and so on — 
appears to have been generally neg- 
lected, particularly in terms of mod- 
eling and long-range planning. Eco- 
system studies of the International 
Biological Program will provide some 
information pertinent to these prob- 
lems, but the difficulty with all short- 
term programs of this type is that 
climate tends to be regarded as an 
environmental constant. 

Some Controversial Topics — Con- 
cerning the stability of the arctic 
pack-ice, would it re-form under pres- 
ent climatic conditions if attempts 
were made to remove it? Data short- 
comings for this area and the problem 
of ocean and atmospheric advection 
of heat have prevented resolution of 
this question. 

It has been argued that the appar- 
ent recent increase of atmospheric 
turbidity may account for the down- 
turn of temperature since about 1940. 
If this were to be confirmed, a contin- 
ued deterioration could be expected, 
other things remaining constant. 

The problem of changes induced by 
turbidity is related to the more gen- 
eral, and equally important, problem 
of distinguishing between "natural" 
and man-induced climatic change. 
This is especially significant in assess- 
ing the actual and potential effects 
of large-scale, long-term weather/ 
climate modification programs. 

Instrumentation — The technical 
aspects of required instruments are, 
in general, adequately covered. With 
respect to determinations of atmos- 
pheric turbidity, however, the ap- 
plication of LIDAR (light detection 
and ranging) needs further evaluation 
and refinement. Similarly, routine 
availability for grid-points of all data 
collected by satellites is essential for 
maximum climatological use of the 
data. 

Adequate deployment (including 
long-term satellite coverage) presents 
the major problem. The number of 
long-term "benchmark" stations for 
measuring the variables referred to 
earlier, in addition to the climatic 
parameters, is inadequate for many 
regions of the globe. 

Requirements for Scientific Activity 

The present climatic fluctuation 
may be of immediate economic signifi- 
cance for areas with marginal climate, 
especially in high latitudes. Over the 
longer term, possible changes else- 
where could be of major importance 
for the planning of agricultural pro- 
duction, architectural design, heating 
requirements, and transportation sys- 
tems. It may not be possible to fore- 
cast climatic fluctuations with any 
confidence for a decade or more, if at 
all, but any planning should incorpo- 
rate the best advice of climatologists. 

Data Collection — Continued and 
intensified monitoring of atmospheric 
dust content, especially in mid-ocean 
and high-elevation sites, is needed. 
Satellite monitoring of global cloudi- 
ness, snow and ice cover, atmospheric 
vapor content, and sea-surface tem- 
perature, with routine data reduction, 
is also required. 

Data collection needs to be planned 
to continue on a long-term basis, 
through such programs as the Global 

Atmospheric Research Program and 
the World Weather Watch. The per- 
spective must certainly be global. (In 
this connection, it is worth noting that 
in much of tropical Africa basic data 
networks are now seriously reduced 
below what they were in the colonial 
era, and this will greatly restrict 
future analyses.) Planning for data 
collection is urgent within the next 
year or two. It cannot be stressed 
too strongly that studies of climatic 
change require a long series of
observations.

Data Analysis — Exhaustive analy- 
sis of all available historical weather 
information, especially outside Eu- 
rope, is needed to provide perspective 
on the recent period. Historians 
could contribute significantly here. 
The reliability of the data must be 
assessed and it must be stored in a 
form suitable for application of mod- 
ern retrieval systems. 

Collection and synthesis of all 
available "historical" information may 
take twenty years. It will, however, 
provide essential information for con- 
tinued development of theory and 
prediction, and it should serve as a 
considerable stimulus to interdisci- 
plinary work and exchange of ideas 
in the fields that are concerned with, 
or affected by, climatic change and 
its implications. 

Dendroclimatic and snow/ice core 
studies should be extended to supple- 
ment direct records. Dendroclimato- 
logical work in the tropics (Africa 
and South America), especially near 
the alpine timberline, is particularly
needed.

Further study is needed of the mag- 
nitude and spatial extent of fluctua-
tions for different climatic parameters,
and of rates of change. These might 
offer confirmation, or otherwise, of 
the existence of various rhythms. 

Advances in general atmospheric- 
circulation studies over the next dec- 
ade should greatly improve our 
understanding of the way in which 
the atmosphere responds to internal 
and external forcing functions. If 
the present climatic deterioration in 
middle and high latitudes of the 
northern hemisphere is part of a 50- 
to 100-year fluctuation, research over 
the next decade would be critical in 
terms of our "engineering" ability to 
cope with it adequately. 

Finally, analyses of air-sea feedback 
effects on various time-scales need to 
be undertaken. 

Numerical Model Experiments — 
Model experiments should provide 
definitive information on the effect 
of such variables as pack-ice extent, 
snow cover, and sea-temperature 
anomalies on the heat budget and on 
atmospheric circulation patterns. Ade- 
quate sophistication will probably be 
available for this work within two to 
five years. 

Work in progress should provide 
information on tropical-temperature 
and trans-equatorial links in the gen- 
eral circulation necessary to an under- 
standing of spatial aspects of fluctua- 
tions. It is not yet clear, however, 
whether or not this work will clarify 
understanding of the way in which 
seasonal weather patterns commonly 
develop in different manners in differ- 
ent years. This is fundamental to the 
possibilities of predicting short- or 
long-term fluctuations. 

Environmental Cyclic Behavior: The Evidence of Tree Rings and Pollen Profiles 

One of the major problems to be 
faced before we can arrive at an un- 
derstanding of environmental cyclic 
behavior is concerned with standard- 

izing definitions. Attempts at world- 
wide standardization of terms in cli- 
matic studies are being made. The 
Commission for Climatology of the
World Meteorological Organization
has published two suggested glossa- 
ries, one for various statistical charac- 
teristics of climatic change and the
other for differentiating between the
various time-scales of climatic change. 
Similar glossaries are needed for other 
aspects of the physical matrix making 
up the environment. 

"Operational" definitions are used 
here. "Environment" is considered to 
be the physical matrix in which orga- 
nized and unorganized matter exists. 
The term "cycle" refers to the com- 
plete course of events or phenomena 
that recur regularly in the same se- 
quence and return to the original 
state; in this sense, a cycle has a true 
harmonic course. "Cyclic" (or "cycle- 
like") refers to something that only 
roughly approximates a harmonic. 

Aside from "seasonal" patterns, no 
true harmonic behavior has been 
found in global or regional climatic 
patterns; the latter are cyclic patterns 
but they vary in duration and inten- 
sity. Several biological and natural 
processes approach harmonic behavior
so closely that they are sometimes
called rhythms, but these rhythms are
generally tied to seasonal climatic
patterns.

Tree growth and pollen production 
are, in a certain sense, a physiological 
response to the climatic conditions 
prevailing at the time these processes 
occurred. A thorough understanding 
of these processes leads to a better 
understanding of the immediate envi- 
ronment, and when old samples of 
tree rings (the long chronologies) and 
pollen production (the pollen profiles) 
can be located and studied, past local 
environmental conditions can be de- 
termined for those specific areas. Pub- 
lications are now appearing on cli- 
matic conditions over the past 15,000 
years or so, as interpreted by various 
authors. Although cyclical patterns 
appear in many of these interpreta- 
tions, the patterns are so obscure that 
little credence can be put on their
validity.

Tree Rings and Environmental 
Cyclic Behavior 

Certain species of trees are physio-
logically constrained to complete all of
their yearly growth in a particular
period of time. Thus, growth itself 
is harmonic. The amount of growth 
produced each year, however, varies 
in response to environmental changes. 
Trees in a uniform environment, or 
one that remains fairly constant year 
in and year out, produce tree rings 
of a uniform width over a given 
period of years. In contrast, trees 
growing in areas where environmen- 
tal changes are quite pronounced will 
reflect those changes in variable ring- 
widths for a given period of years. 
(See Figure III-5.) In areas where one
growth-controlling factor assumes
dominance over the others, this factor
can be isolated; variations of ring-
widths then permit study of this par-
ticular type of variable environmental
condition.

Figure III-5. The photograph shows tree rings beginning about 1690 and ending
about 1932. The rings were used to estimate whether the year was wet or dry;
moisture was then computed to provide the graph in the lower part of the diagram.
Since variations in atmospheric circulation cause periods of wetness and dryness,
the ring-width records can be calibrated with surface pressure and used to map
anomalies of the atmospheric circulation for periods of time in which few if any
historical data exist.

In certain areas the con-
trolling factor might be soil moisture,
in others it might be summer temper- 
atures, and in still others it might be 
solar radiation. If too many variables 
enter the picture, to a point where 
they cannot be isolated, the growth 
patterns become "confused"; in the 
present state of knowledge, they are 
of little value for this type of study. 

Numerous studies are being con- 
ducted on tree growth. Those con- 
cerned with the bristlecone pine 
(Pinus aristata Engelm.) in the White
Mountains of eastern California are 
among the more important. Living 
bristlecone trees as old as 5,000 years 
or more have been studied and a good 
yearly growth chronology for that 
period of time has been developed. 
Bristlecone snags and other pieces of 
deadwood have enabled the chronol- 
ogy to be extended back for over 
7,000 years. Similar but shorter 
chronologies have been developed in 
other areas throughout many parts 
of the northern hemisphere. Some 
work has been done in the southern 
hemisphere but none has yet attained 
the length of the bristlecone studies. 
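Extending a chronology with snags and deadwood rests on cross-dating: sliding an undated ring-width series along the dated master chronology until the overlap with the best pattern match is found. The sketch below is a minimal, hypothetical illustration of that idea; the ring widths and function names are invented, not taken from any dendrochronology package.

```python
def correlation(a, b):
    """Pearson correlation of two equal-length ring-width series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def best_offset(master, undated):
    """Offset in the dated master series where the undated series
    correlates best -- the essence of cross-dating."""
    n = len(undated)
    return max(range(len(master) - n + 1),
               key=lambda i: correlation(master[i:i + n], undated))

# Hypothetical ring widths (mm); the deadwood piece overlaps the master
# chronology beginning at year-index 3.
master = [1.2, 0.8, 1.5, 0.6, 1.1, 0.9, 1.4, 0.7, 1.0, 1.3]
undated = [0.6, 1.1, 0.9, 1.4]
print(best_offset(master, undated))  # → 3
```

In practice many more years of overlap, and checks against several trees, are required before a placement is accepted.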

Although numerous studies on 
these tree-ring series have been made 
by meteorologists, climatologists, and 
statisticians, as well as dendrochron- 
ologists and others, no cyclic pattern 
has been detected in spite of the 
annual variation that exists. The non- 
uniform periods of good and poor 
growing conditions for the bristlecone 
show a cycle-like behavior. But be- 
cause of the wide variation in inten- 
sity and duration, one can "screen" 
the data to find almost any cycle 
length desired, or even none at all. 
These data appear to be promising 
from the standpoint of cyclic be- 

havior, but at present they are of lim- 
ited value. 
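The hazard of "screening" can be shown numerically: a sinusoid of any chosen period, fitted by least squares to pure noise, always returns a nonzero amplitude, so an uncritical search will always "find" its cycle. A hedged sketch, with all numbers illustrative:

```python
import math
import random

random.seed(42)
N = 500
noise = [random.gauss(0.0, 1.0) for _ in range(N)]  # stand-in for ring widths

def fitted_amplitude(series, period):
    """Least-squares amplitude of a sinusoid of the given period.
    For pure noise this is nonzero for *any* period one cares to try,
    which is how indiscriminate screening manufactures cycles."""
    n = len(series)
    c = sum(x * math.cos(2 * math.pi * t / period) for t, x in enumerate(series))
    s = sum(x * math.sin(2 * math.pi * t / period) for t, x in enumerate(series))
    return 2.0 * math.hypot(c, s) / n

for period in (11, 22, 80):  # pick any "cycle" one likes
    print(period, round(fitted_amplitude(noise, period), 3))
```

A claimed cycle is meaningful only if its amplitude is significantly larger than what such noise alone would yield.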

Pollen Profiles and Environmental 
Cyclic Behavior 

The number of pollen grains and spores
recovered from any depositional se- 
quence is the result of a wide variety 
of variable factors which are not yet 
well understood. Climatic factors are 
involved in the production and dis- 
persal of pollen of the various wind- 
pollinated species; in addition, a dif- 
ferential is caused by preservation and 
recovery from the sediments. Experi- 
mental work is being done on almost 
every aspect of these wide variations, 
and there is hope that the future will 
see at least a reasonable solution to 
many of these problems. 

Profiles represent a random count 
of various wind-pollinated species re- 
covered from sediments. Seldom does 
the palynologist working with recent 
materials give an absolute pollen 
count of every grain present on the 
slide. These counts are treated in a 
statistical manner in an attempt to 
overcome bias caused by differential 
production (plants too close to the 
depositional area) or differential pres- 
ervation (oxidation). Such profiles 
give only a gross representation of the 
true situation. Furthermore, no an- 
nual variation in past pollen produc- 
tion or preservation can be detected 
from such profiles unless the variation 
is frozen into annual deposits such as 
a varved clay sequence. 

Pollen profiles are being interpreted 
as essentially representing the vegeta- 
tive cover existing at the time the pol- 
len was produced. The vegetative 
cover was, in turn, a response to envi-
ronmental conditions during those
periods, and those environmental con-
ditions are interpreted as being of 
climatic significance. 

Present Status 

There is no question that, under cer- 
tain environmental conditions, plants 
produce different amounts of growth 
in the annual layers of wood, and dif- 
ferent amounts of pollen are produced 
and dispersed during the pollen- 
production seasons. Tree-growth and 
pollen studies are still, however, in 
what one could call a primitive state. 
We are only now learning what the 
problems actually are. As soon as the 
problems can be better defined, con- 
centrated effort can be made toward 
their solution. At the present time, 
only trends can be detected in the 
various environmental conditions; no 
scientific prediction can yet be made 
from these trends. 

In general, more physiological 
studies are needed regarding the con- 
nection between environment and 
tree-ring growth, especially in quanti- 
tative amounts. Such studies need to 
be made on a variety of species grow- 
ing under a wide variety of condi- 
tions. Once these measurements are 
made and understood, considerable 
statistical work (computer analysis) 
will be necessary to reduce the data to 
usable forms. We are still in need of 
better knowledge on pollen produc- 
tion and dispersal, on pollen preserva- 
tion and recovery, and on statistical 
(or computer) analyses of recovered 
grains. These studies will be of 
limited value, however, if we do not 
also have a much better understand- 
ing of all aspects of the physical 
matrix comprising the natural envi-
ronment.




Basic Factors in Climatic Change 

It is useful to introduce the prob- 
lem of climatic change by considering 
the definition of climate. Practical 
definitions of the term "climate" vary 
in their specifics from one authority to 
another. All are alike, however, in 
distinguishing between climate and 
weather (and between climatology 
and meteorology) on the basis that 
climate refers to "average" atmos- 
pheric behavior whereas weather 
refers to individual atmospheric events 
and developments. On the face of it, 
then, it might seem that we are left 
simply with the decision of what time 
interval to choose over which to aver- 
age the observed weather into "the 
climate." By "average" is meant aver- 
age statistical properties in all re- 
spects, including means, extremes, 
joint frequency distributions, time- 
series structure, and so on. 
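The statistical content of this definition can be made concrete with a toy record; the ten daily temperatures below are invented, and real climatological practice adds joint frequency distributions and time-series structure on top of these basics.

```python
# Sketch: "climate" as the full statistical description of a weather
# record, not just its mean.
daily = [12.0, 15.5, 9.0, 21.0, 18.5, 7.5, 14.0, 16.5, 11.0, 19.0]

mean = sum(daily) / len(daily)
extremes = (min(daily), max(daily))

freq = {}                        # frequency distribution in 5-degree classes
for t in daily:
    bucket = int(t // 5) * 5
    freq[bucket] = freq.get(bucket, 0) + 1

print(mean, extremes, sorted(freq.items()))
```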

Climatic Change as a Fundamental 
Attribute of Climate 

Were atmospheric behavior to pro- 
ceed randomly in time, the problem 
of defining climate would reduce to a 
straightforward exercise in statistical 
sampling. We could make our esti- 
mate of climate as precise as we wish 
merely by choosing an average inter- 
val that is sufficiently long. One diffi- 
culty arises immediately because our 
knowledge of past atmospheric be- 
havior becomes less and less detailed 
(and less and less reliable) the further 
back in time we go. But there is an- 
other, more important difficulty: If 
our knowledge of past climates is im- 
precise, it is at least good enough to 
establish that long-term atmospheric 
behavior does not proceed randomly 
in time. Changes of climate from one 
geological epoch to another, and ap- 
parently also those from one millen- 

nium to another, are clearly too large 
in amplitude to be explained as ran- 
dom excursions from modern norms. 

When one examines modern recon- 
structions of the paleoclimatic rec- 
ord, one might be led to suppose that 
geological changes of climate — such 
as those associated with the alternat- 
ing glacials and interglacials of the 
Pleistocene ice age — are smoothly 
varying functions of time, readily dis- 
tinguishable from the much more 
rapid variability of year-to-year 
changes of atmospheric state. In 
other words, one might suppose that 
each part of a glacial cycle has its own 
well-defined climate, just as each sea- 
son of the year is revealed by modern 
meteorological data to have its own 
well-defined climate. In such a case, 
the averaging interval needed to ob- 
tain a stable estimate of present-day 
climate should be long enough to sup- 
press year-to-year sampling variabil- 
ity, but short in comparison to the 
duration of a glacial cycle. 

If we succumbed to the foregoing 
rationale for defining climate, we 
would probably be living in a fool's 
paradise. The reason is simple 
enough: the apparent regularity of 
atmospheric changes in the geological 
past is only an illusion, attributable to 
the inadequate resolving power of 
paleoclimatic indicators. Most such 
indicators act to one degree or another 
as low-pass filters of the actual cli- 
matic chronology. If our more recent 
experience — based on relatively 
higher-pass filters such as tree-rings, 
varves, ice-cap stratigraphy, and pol- 
len analysis applicable to post-glacial 
time — is any guide, the state of the 
atmosphere has varied on most, if not 
all, shorter scales of time as well. 

In other words, the variance spec- 
trum of changes of atmospheric state 
is strongly "reddened," with low- 
frequency changes accounting for rel- 
atively large proportions of the total 
variance (in the broadband sense). 
At the same time, important gaps in 
the spectrum of climatic change have 
yet to be identified and may not even 
exist. Taken together, these circum- 
stances imply that there may be no 
such thing as an "optimum" averag-
ing interval, and therefore no assur- 
ance that we can define (let alone 
measure) a unique, "best" estimate of 
what constitutes average behavior of 
the atmosphere. 
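The practical consequence of a reddened spectrum can be demonstrated with synthetic series: for white noise the variance of block averages falls off as 1/n, but for a simple first-order red-noise process it falls far more slowly, so lengthening the averaging interval never stabilizes the "climate" estimate. A sketch under those assumptions (the AR(1) coefficient is illustrative):

```python
import random

random.seed(1)

def block_mean_variance(series, block):
    """Variance of non-overlapping block means of a series."""
    means = [sum(series[i:i + block]) / block
             for i in range(0, len(series) - block + 1, block)]
    m = sum(means) / len(means)
    return sum((v - m) ** 2 for v in means) / len(means)

N, BLOCK = 20000, 100
white = [random.gauss(0, 1) for _ in range(N)]

red, x = [], 0.0
for _ in range(N):                 # AR(1) "red" noise: x[t] = 0.95*x[t-1] + e
    x = 0.95 * x + random.gauss(0, 1)
    red.append(x)

# White noise: 100-point means have ~1/100 the variance of the raw series.
# Red noise: block means remain far more variable, whatever the block size.
print(block_mean_variance(white, BLOCK))
print(block_mean_variance(red, BLOCK))
```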

To summarize, atmospheric state is 
known to vary on many scales of 
time, and it cannot be ruled out from 
present knowledge that it varies on all 
scales of time (from billions of years 
all the way down to periods so short 
that they are better defined as mete- 
orological variability). Thus it can be 
argued that the very concept of cli- 
mate is sterile as a physical descriptor 
of the real world as long as it adheres 
to the classical concept of something 
static. In any event, present-day cli- 
mate is best described in terms of a 
transient adjustment of atmospheric 
mean state to the present terrestrial
environment.

The Problem of Causes 

If climate is inherently variable, as 
here suggested, different interpreta- 
tions can be lent to the variability. 

The "Slave" Concept — One inter- 
pretation is the conventional one, 
which can be called the "slave" con- 
cept of climatic change. This em- 



bodies the idea that the average 
atmospheric state is virtually indis- 
tinguishable from an equilibrium 
state, which in turn is uniquely con- 
sistent with the earth-environmental 
conditions at the time; in this view, 
the atmosphere requires a relatively 
short time to adjust to its new equilib- 
rium state when the earth-environ- 
mental conditions change. 

The "Conspirator" Concept — An- 
other interpretation can be called the 
"conspirator" concept of climatic 
change. This concept considers that 
the average atmospheric state is in- 
fluenced as much by its own past 
history as by contemporary earth- 
environmental conditions, that there 
may be more than one equilibrium 
state that is consistent with those en- 
vironmental conditions, and that the 
choice of equilibrium state approxi- 
mated by the actual atmospheric state 
at any given time depends upon the 
antecedent history of the actual state. 

Sharp Distinctions — The distinc-
tion between these two concepts is
sharp for long-period climatic change, 
such as the change from Tertiary to 
Quaternary times. On such a time- 
scale, the dynamic and thermody- 
namic time-constants of atmospheric 
processes are infinitesimal, even if one 
chose to include the oceans and the 
polar ice-caps as coupled "atmos- 
pheric" processes. As now seems 
plausible, earth-environmental
changes included gradual sea-floor 
spreading and continental drift, to- 
gether with a gradual increase of 
average continental elevation. It is 
usually assumed that the climate acted 
in keeping with the "slave" concept 
throughout and that, after a certain 
point in the course of continental drift 
was reached (perhaps when the Arc- 
tic Ocean was isolated), the equilib- 
rium climate was transformed in a 
deterministic manner from a glacial- 
inhibiting pattern to a glacial-stimu- 
lating pattern. 

On the other hand, it is possible to 
argue, following Lorenz, that the 

actual climate of the Quaternary was 
not necessarily preordained by its 
contemporary environmental state; 
that the evolution of climate to its 
Quaternary mode was not a deter- 
ministic evolution but a probabilistic 
one that might have turned out very 
differently under identical conditions 
of continental drift and other environ- 
mental change. The different Quater- 
nary outcomes (two or more) would 
have followed from differences in the 
precise course of the climate itself, 
due either to transient environmental 
disturbances or perhaps to "random" 
excursions of atmospheric state along 
the way. 

Subtle Distinctions — With regard 
to relatively rapid climatic change, 
however, the distinction between the 
"slave" and the "conspirator" con- 
cepts of change is much more subtle 
in character, and perhaps unrecog- 
nizable within present bounds of 
either theory or observation. The rea- 
son for this is to be found in the inti- 
mate dynamic and thermodynamic 
coupling that exists between the at- 
mosphere and the oceans, and to a 
lesser extent in the coupling between 
the atmosphere, the oceans, and the 
polar ice-caps. These couplings intro- 
duce long time-constants into the 
changes of atmospheric state, and re- 
sult in various forms of autovariation
in the total system, on the time-scale 
of decades and centuries. In the 
course of such autovariation, the at- 
mosphere itself may be said to obey 
the "slave" principle. But in a rela- 
tively limited period of years, the 
coupled atmosphere-ocean system 
would exhibit changes of state that 
are not independent of its initial state. 
In this case, the system is more prop- 
erly described as obeying the "con- 
spirator" principle. To complicate 
matters further, it is conceivable that 
the autovariation of the atmosphere- 
ocean system is riding on top of a 
transient of the Lorenz type already
under way.

In the presence of Lorenz-type 
transients, the effect of systematic en- 

vironmental changes on present-day
climate (changes, for example, involv-
ing secular increases of carbon di-
oxide (CO2) or other consequences of
human activities) might be so badly
confounded as to be totally unrecog-
nizable. Even without such transients,
however, atmosphere-ocean autovari-
ation could effectively obscure the
effect of systematic environmental
changes that we are seeking to discern.

Rationale for the Isolation of
Human from Natural Factors in
Climatic Change

What rationale, then, are we to fol- 
low in establishing the climatic effects 
of systematic environmental change 
on the scale of decades and centuries? 
More specifically, how do we go about 
the task of isolating the contribu- 
tion of man's activities to twentieth- 
century climatic change? 

First of all, there seems no real pos- 
sibility of detecting Lorenz-type tran- 
sients in present-day climate, so we 
will have to proceed on the assump- 
tion that they are not now occurring 
nor are they likely to be induced in 
the foreseeable future by further 
environmental change from human
activities.

Second, while we should not hesi- 
tate to use presently available esti- 
mates of the climatic effects of atmos- 
pheric pollution and other forms of 
environmental change as an interim 
guide in assessing the potential cli- 
matic hazards of various human ac- 
tivities, we should also remember that 
such estimates are highly tentative. 
We should take pains not to put 
undue confidence in them. 

Shortcomings of the Present Data 
Base — In this connection, there are 
two important points to consider: 

1. Most present estimates of the 
climatic impact of human activ- 
ities are based on relatively 



simple hydrostatic heat-balance 
models (as refined, for example, 
by Manabe and used by him to 
estimate the thermal effect of 
variable CO2, stratospheric wa-
ter vapor, surface albedo, and 
the like). Manabe himself has 
often stressed the limitations of 
such models, the most impor- 
tant of which are: that they do 
not take account of atmospheric 
dynamics other than purely 
local convective mixing; and 
that they do not take into ac- 
count changes of atmospheric 
variables other than the varia- 
ble that is explicitly controlled 
as a parameter of the calcula- 
tion (plus water vapor in those 
experiments stipulating a con- 
stant relative humidity). 

2. Climatic changes caused by nat- 
ural agencies, and those possi- 
bly caused by human agencies, 
are not necessarily additive. For 
example, by analysis of past 
data on CO2 accumulation in
the atmosphere, roughly 50 per-
cent of all fossil CO2 added to
the atmosphere appears to have
been retained there. Using pub-
lished United Nations projec-
tions of future fossil CO2
production, together with a
constant 50 percent retention
ratio, it can be predicted that by
A.D. 2000 the total atmospheric
CO2 load will have exceeded its
nineteenth-century baseline by
more than 25 percent. As
pointed out by Machta, how-
ever, recent atmospheric CO2
measurements at Mauna Loa
and other locations indicate that
the atmospheric CO2 retention
ratio has been dropping steadily
since 1958, to a present value
of only about 35 percent.
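The arithmetic behind such projections is simple in outline. The sketch below treats cumulative fossil emissions as a fraction of the nineteenth-century atmospheric load; the 0.5 emissions figure is illustrative, chosen only to reproduce the 25 percent increase quoted in the text, and is not a United Nations projection.

```python
def atmospheric_increase(emitted_fraction, retention_ratio):
    """Fractional rise of the atmospheric CO2 load above its baseline,
    given cumulative fossil emissions expressed as a fraction of that
    baseline load, and the fraction of emissions retained in the air."""
    return emitted_fraction * retention_ratio

# With cumulative emissions equal to half the nineteenth-century load
# (hypothetical), a 50 percent retention ratio gives the 25 percent
# increase of the text; the lower post-1958 ratio gives correspondingly less.
print(atmospheric_increase(0.5, 0.50))  # 0.25
print(atmospheric_increase(0.5, 0.35))  # 0.175
```

The point of the passage stands out clearly in this form: the projected increase scales directly with the retention ratio, so a ratio that itself varies with climate undermines any fixed-ratio forecast.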

It may be significant that the 50 
percent retention figure applied to a 
time when world average tempera- 
tures were rising, and that the ob- 
served decline since 1958 applies to a 
time when world average tempera- 
tures have been falling. It is conceiv- 

able, though certainly not proven, 
that the reversing trend of world cli- 
mate in recent years has somehow 
altered the rate at which the oceans 
can absorb fossil CO2. If this is the 
case, we are witnessing an interactive 
effect whereby climatic changes pro- 
duced by one agency (presumably a 
natural one) are at least temporarily 
reducing the climatic impact of an- 
other agency (in this case, an inad- 
vertent human one). Such interactive 
effects are very poorly understood, 
and yet they may be a very important 
element in the evolution of present- 
day climate. 

The Use of Advanced Mathematical 
Models — To return to our question 
of what rationale we should follow in 
our study of contemporary climatic 
change and of human influences on 
climate, we are left with little choice. 
We have to rely on the development 
of advanced mathematical models of 
the global atmosphere that will be 
suitable for long-term integration to 
generate stable climatological statistics 
and will be capable of simulating 
many dynamic and thermodynamic 
processes in the atmosphere and at 
the earth's surface. Relatively sophis- 
ticated models of these kinds have 
already been developed, at least one of 
which has been expanded to deal with 
coupled atmosphere-ocean systems. 
Experiments with such models have 
begun to lay a solid foundation for a 
quantitative theory of global climate 
and have elucidated the climate- 
controlling influence of the general 
atmospheric and oceanic circulations. 
There appears to be no limit to the 
refinement possible in such models, 
other than the limits imposed by com- 
puter capacity and speed. 

The manner in which such numeri- 
cal experiments bear on the study of 
climatic change is essentially twofold: 

1. The experiments verify that a 
wide range of environmental 
factors have a bearing on the 
global pattern of atmospheric 
circulation and climate. They 

confirm that the most impor- 
tant factors in this respect are: 
(a) solar emittance; (b) the 
geometry of the earth-sun sys- 
tem including the orbital and 
axial motions of the earth; 
(c) the distribution of oceans 
and land masses; (d) the state 
of the ocean surface which, 
along with the juxtaposed at- 
mospheric state, governs the 
fluxes of energy, moisture, and 
momentum across the surface; 
(e) the state of the land surfaces 
with respect to albedo, thermal 
capacity, water and ice cover, 
relief, and aerodynamic rough- 
ness; and (f) the gaseous and 
aerosol composition of the at- 
mosphere itself. To the extent 
that all of these factors may 
vary with time, either slowly 
or rapidly, in response to forces 
other than the contemporary at- 
mospheric state itself, all such 
factors are automatically to be 
regarded as potential causes of 
climatic change. 

2. In the numerical experiments, it 
is possible to simulate the be- 
havior of circulation and cli- 
mate as a function of arbitrarily 
chosen boundary conditions 
and atmospheric constituency, 
which enter the experiments as 
controllable parameters. This 
makes it possible to vary any 
of the environmental factors 
listed above and determine how 
the circulation and climate re- 
spond. In this way, various 
theories of climatic change can 
be tested in terms of their mete- 
orological consistency. With the 
further refinement of joint at- 
mosphere-ocean models, the 
more realistic modeling of con- 
tinents and ocean basins, and 
the introduction of ice-cap in- 
teractions into the models, the 
range of factors in climatic 
causation that are amenable to 
this kind of study will eventu- 
ally become almost exhaustive
of all reasonable possibilities. 



Factors related to human activi- 
ties would, of course, be in-
cluded.

Other Requirements — As neces- 
sary as such model experiments may 
be to the study of climatic change, it 
is important to realize that they are 
not sufficient to solve the problem of
climatic change. It is not enough that 
we develop the ability to measure the 

response of climate to varying envi- 
ronmental conditions. If we are to 
decipher past climatic changes or to 
predict future changes, it is necessary 
to determine which environmental 
controls of climate have been (or will 
be) doing the varying, at what rate, 
and in what direction. 

At present, there is a deplorable 
lack of understanding about the vari- 

ability of our environment. There can 
be no guarantee that the necessary 
understanding will ever be acquired in 
full, for that in turn may depend on 
unknowable past events and unpre- 
dictable future events. But we should 
learn what we can, for in no other 
way can we be certain whether the 
climatic changes of the twentieth cen- 
tury are or are not causally related to 
man's activities. 

The Radiation Balance 

It has become evident that man can 
change the entire atmosphere of his 
planet in certain subtle ways. And he 
can modify large regions in rather 
obvious ways — for example, with 
smoke and smog. There are now new 
dimensions to his leverage on the at- 
mosphere as he flies large jet aircraft 
even higher in the stratosphere, and 
as booster rockets of the Saturn class 
introduce hundreds of tons of exhaust 
into the thin reaches of the upper
atmosphere.

The effects of these changes on the 
environment are diverse. We have re- 
cently become aware of the evident 
and sometimes acute effects of pollu- 
tion in the world's cities. Now we 
realize that man may even be able to 
change the climate of the earth. This 
is one of the most important questions 
of our time, and it must certainly rank 
near the top of the priority list in 
atmospheric science. 

General-Circulation Models 
of the Atmosphere 

In recent years, it has been possible 
to create fairly realistic numerical 
models of the global atmosphere that 
behave very much the way the real 
atmosphere does. (See Figure III-6) 
The atmosphere is a great heat engine 
that runs on solar energy, taking ad- 
vantage of the greater amount of heat 

that reaches the equatorial zone. The 
function of the heat engine is to trans- 
port this heat from the equator to the 

poles. In the process, the atmosphere 
moves with the patterns of the winds 
seen on any weather map. 


Figure III-6. Sea level pressure distribution for the western hemisphere has been simulated by
a numerical model developed by the National Center for Atmospheric Research. 
The time is 42.83 days into the simulation and represents a moment in a typical 
January. The contour interval is 5 millibars. White areas represent clouds. The 
influence of these clouds is taken into account in the radiation calculations. Areas 
of high pressure are indicated by H, areas of low pressure by L. 



Modeling of the atmospheric heat 
engine is complicated by the existence 
of another, more sluggish but massive 
heat engine — namely, the oceans. 
While the ocean does not move as fast 
as the atmosphere, its tremendous 
heat capacity more than offsets its 
slow movement. The ocean circula- 
tion is coupled to that of the atmos- 
phere, and nearly as much heat is 
transported from equator to pole in 
the oceans as in the atmosphere. 

The key to this atmosphere-ocean 
system, the ultimate driving force, is 
the solar radiation that is absorbed, 
mostly in the equatorial regions, and 
the infrared radiation that is emitted 
back to space at all latitudes. One 
cannot consider the heat involved in 
radiation, however, without also con- 
sidering the internal heat released into 
the atmosphere by the condensation
of water vapor. In fact, most of the 
heat that is transported from the 
equator to the middle latitudes is in 
the form of the latent heat of water 
vapor, heat that is released whenever 
it rains or snows. 

Experimental general-circulation
models of the atmosphere that have 
been run on large computers at the 
National Center for Atmospheric Re- 
search (NCAR), the Geophysical Fluid 
Dynamics Laboratory of the National 
Oceanic and Atmospheric Adminis- 
tration (NOAA), and the University 
of California at Los Angeles also take 
into account the effect of the moun- 
tain ranges of the world, the rotation 
of the earth, and the complex proc- 
esses that exchange heat, moisture, 
and momentum vertically by means 
of convection, particularly in the 
tropics. All these processes can be 
related to each other by a set of dif- 
ferential equations that involve time. 
A model is made to "run" by integrat- 
ing these equations in small time- 
steps, and the result is a model of a 
moving fluid system that behaves 
very much like the real atmosphere. 
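
The time-stepping procedure described above can be illustrated with a toy calculation. The fragment below, a minimal sketch in Python, integrates the Lorenz convection equations, a drastically reduced analogue of atmospheric flow; the step size, parameter values, and integration length are illustrative choices, not those of any of the models named above.

```python
# Toy illustration of time-stepping: the Lorenz (1963) convection system,
# a drastically simplified analogue of atmospheric flow, advanced by
# integrating its differential equations in small time-steps.

def lorenz_step(x, y, z, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the system one explicit Euler time-step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

# "Run" the model: many small steps produce a bounded, weather-like
# trajectory that never exactly repeats.
x, y, z = 1.0, 1.0, 1.0
dt = 0.001
for _ in range(10_000):
    x, y, z = lorenz_step(x, y, z, dt)
```

A production general-circulation model steps a vastly larger system of equations in the same way, but the principle of advancing the state in small increments of time is identical.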

Future Refinements — With these 
general-circulation models we can, in 

principle, do "experiments" to learn 
how the atmosphere would change 
with time if there were a change, for 
example, in the ability of the atmos- 
phere to transmit solar radiation due 
to smoke, haze, or smog, or how it 
would change if there were a growth 
or shrinking of the size of the great 
polar ice-caps. 

Actually, however, we are still a 
long way from realizing a model that 
is adequate for such experiments in 
"climatic change." The current gen- 
eral-circulation models are designed 
to show the hour-to-hour, day-to-day, 
and week-to-week changes; we would 
run out of computer time if we used 
them to study really long-term 
changes. Long-term changes in this 
system would certainly involve 
changes in the ocean. Hence, it would 
not be enough to consider only the 
circulations of the atmosphere. Never- 
theless, there is hope that, in time, we 
will be able to develop theoretical 
numerical models with which to con- 
duct experiments on the atmosphere- 
ocean climate and how it will change 
with changes in the heat available to 
the system. These models will require 
a considerable effort in developing 
quasi-statistical shortcuts and the 
availability of larger computers than 
we have now. 

The Radiation Budget 

As mentioned, radiation is the ulti- 
mate source of energy to drive the 
complex atmosphere-ocean system. In 
order to gain an idea of the role that 
radiation plays in keeping the system 
in motion, we can perform a simple 
calculation of the rate of energy input 
from the sun as compared to the 
amount of energy that the atmos- 
phere contains at any time. The solar 
radiation absorbed by the system is 
about 600 calories per cm² per day, 
and the average total thermal heat 
energy of the atmosphere is about 
60,000 calories per cm². This means 
that if the solar radiation were cut off 

abruptly, about 10 percent of the 
energy of the atmosphere would dis- 
appear within ten days. This is enough 
to cause an appreciable change in the 
circulation. Such a rough calculation 
indicates that the atmosphere will re- 
spond to a change of heat input in a 
week or less. 
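
This back-of-the-envelope estimate can be reproduced directly from the figures quoted in the text:

```python
# The rate of solar energy input versus the heat stored in the atmosphere,
# using the values quoted in the text.
solar_input = 600.0       # calories per cm^2 per day absorbed by the system
heat_content = 60_000.0   # calories per cm^2 stored in the atmosphere

fraction_per_day = solar_input / heat_content   # 1 percent per day
loss_in_ten_days = 10 * fraction_per_day        # about 10 percent
```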

Determinants of the Earth's Albedo 
— The solar radiation that reaches 
the earth is partly reflected back into 
space, partly absorbed by the atmos- 
phere, but mostly absorbed by the 
surface. (See Figure III-7) In the 
1940's it was estimated that about 
40 percent of the solar radiation was 
reflected back to space, but more re- 
cent estimates, based largely on satel- 
lite observations, have been lowered 
to about 30 percent. This average re- 
flectivity is referred to as "the earth's 
albedo." The fact that there has been 
such a large uncertainty as to the 
magnitude of the albedo is testimony 
to our general uncertainty about the 
amount of energy available to the atmosphere-ocean system. 

Cloud Cover — Another important 
variable is the cloud cover, since 
clouds are generally much more highly 
reflecting than the surface of the 
earth. The same is true of snow and 
ice. An increase in cloud cover in- 
creases the albedo. For example, a 
change of 5 percent in the average 
cloudiness of the equatorial zone, an 
amount that would go unnoticed, 
would change the albedo of the earth 
by about 1.5 percent. This would 
represent an appreciable decrease in 
the energy available to drive the 
atmosphere-ocean system. 

The same effect as a decrease in 
amount of cloud cover would be 
achieved by a decrease in the reflec- 
tivity of the clouds, since both would 
decrease the net albedo of the earth. 
Clouds moving over regions with in- 
dustrial pollution, such as Europe, 
show a decrease in reflectivity from 
about .95 (for pure water clouds more 
than half a kilometer thick) to .80 or 
.85. This much reduction in cloud 



Figure III-7

The diagram indicates the major components in the global radiation balance. The 
albedo, or reflectivity, is composed of the radiation reflected from the ground, 
clouds, aerosols, and other materials that might scatter incoming solar radiation. 

reflectivity, in the unlikely event that 
all clouds were so affected, would 
have about the same effect as a 5 per- 
cent reduction in cloud amount. 
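
The equivalence asserted here is easy to check with a rough calculation. The global cloud fraction of 0.5 used below is an assumed round number, not a value from this report:

```python
# Two ways of reducing the sunlight reflected by clouds, compared.
# The global cloud fraction of 0.5 is an assumed round number.
cloud_fraction = 0.50
cloud_reflectivity = 0.95

# (a) all clouds dim from reflectivity 0.95 to 0.85, amount unchanged
loss_dimmer_clouds = cloud_fraction * (0.95 - 0.85)

# (b) cloud amount falls by 5 percentage points, reflectivity unchanged
loss_fewer_clouds = 0.05 * cloud_reflectivity

# Both remove roughly 5 percent of the cloud-reflected sunlight.
```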

Budyko, at Leningrad, and Sellers, 
at the University of Arizona, have 
taken this energy calculation one step 
further, arguing that a decrease of 
only 1.6 to 2.0 percent in the solar 
radiation available to the earth would 
lead to an unstable condition in which 
continental snow cover would ad- 
vance all the way to the equator, with 
the albedo raised by the greater snow 
cover to the point where the oceans 
would eventually freeze. Lest this 
rather frightening calculation be taken 
too seriously, it should be mentioned 
that there is no evidence that a mech- 
anism for a change of as much as 1.5 
percent actually exists, or ever has in 
the history of the earth. The model 
nevertheless illustrates the delicacy of 
our planet's thermal balance. 
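
The flavor of the Budyko-Sellers argument can be conveyed by a zero-dimensional energy-balance sketch in which the planetary albedo rises as temperature falls. Every number below, including the albedo ramp and the effective emissivity standing in for the greenhouse effect, is an illustrative assumption rather than a value taken from either author; the point is only that a modest reduction in solar input can tip such a system into a frozen state.

```python
# Zero-dimensional ice-albedo feedback sketch; all numbers are
# illustrative assumptions, not values from Budyko or Sellers.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2
EPS = 0.61        # effective emissivity standing in for the greenhouse effect

def albedo(T):
    """Planetary albedo rises as snow advances: 0.30 warm, 0.62 frozen."""
    if T < 250.0:
        return 0.62
    if T > 285.0:
        return 0.30
    return 0.62 + (0.30 - 0.62) * (T - 250.0) / 35.0

def equilibrium_T(solar_fraction, T=288.0):
    """Iterate the balance  solar_fraction*S0/4*(1-a(T)) = EPS*SIGMA*T^4."""
    for _ in range(200):
        absorbed = solar_fraction * S0 / 4.0 * (1.0 - albedo(T))
        T = (absorbed / (EPS * SIGMA)) ** 0.25
    return T

warm = equilibrium_T(1.00)   # present forcing: a temperate equilibrium
cold = equilibrium_T(0.90)   # ~10 percent weaker sun: the frozen branch
```

With full solar input the iteration settles near present-day temperatures; with the input reduced, the snow-albedo feedback carries the sketch all the way to its frozen branch.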

Aerosols — The aerosols that fill 
the atmosphere — natural haze, dust, 

smoke, smog, and so on — probably 
play an important role in the radiation 
balance of the earth, but this is one of 
the great uncertainties in the theory 
of how the atmosphere behaves. 
Aerosols in cloudless air probably in- 
crease the albedo to some extent, 
and they absorb sunlight themselves. 
Also, as we have noted, they can 
change the reflectivity of clouds. We 
are quite certain that variations in the 
solar radiation absorbed by the earth's 
atmosphere and surface, due to 
changes in turbidity or total aerosol 
content of the atmosphere, are signifi- 
cant. Furthermore, as will be noted 
below, aerosols in the atmosphere can 
be greatly affected by man and vol- 
canic activity. 

Factors Affecting Loss of Terrestrial 
Heat — On the other side of the 
ledger by which we keep track of the 
amount of heat into and out of the 
atmosphere-ocean heat engine is the 
loss to space of terrestrial heat by in- 
frared radiation. Over a period of a 
year or so, the amount of radiation 

lost by infrared radiation must almost 
exactly balance the amount of solar 
radiation absorbed by the earth and 
its atmosphere. If this did not hap- 
pen, the earth would rapidly heat or cool. 

As a general principle, any sub- 
stance in the atmosphere that absorbs 
infrared radiation will slow the cool- 
ing of the surface. The reason for this 
is that the energy radiated from the 
surface is absorbed by the absorbing 
substance in the atmosphere, thus 
heating the atmosphere which in turn 
radiates back toward the ground. In 
effect, an absorbing layer acts as a 
radiation blanket, and its presence 
will result in a higher surface temperature. 

An auxiliary effect of this absorb- 
ing blanket will be an increase in the 
stability of the lower part of the at- 
mosphere, between the surface and 
the absorbing layer. This increase in 
stability will reduce convection in the 
lower layers. The ability of the at- 
mosphere to stir itself by convection 
is a principal source of cumulus 
clouds, so that a decrease in convec- 
tion would also decrease precipitation. 

Infrared Absorbers — There are 
two main classes of infrared absorbers 
in the atmosphere: trace gases (water 
vapor and carbon dioxide (CO2) being 
the most important in the lower at- 
mosphere) and aerosols of all kinds, 
including clouds. Various estimates 
have been made of the effect of in- 
creasing CO2 in the atmosphere, since 
man has in fact been able to raise 
the total amount through burning fos- 
sil fuels. Since 1900, the amount of 
CO2 has increased an average of 10 
to 15 percent, and this trend has 
usually been cited to account for the 
observed rise in the average surface 
temperature of 0.2° C up to 
1940. The theoretical calculations of 
Manabe and Wetherald indicate that 
a doubling of the CO2 content in the 
atmosphere would have the effect of 
raising the temperature of the atmos- 
phere (whose relative humidity is as- 



sumed to be fixed) by about 2° C, 
an appreciable change. 
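
The CO2 warming figures quoted above can be mimicked by a simple logarithmic parameterization. The coefficient and sensitivity below are assumptions chosen to reproduce the roughly 2° C doubling estimate; the logarithmic form and the 5.35 value come from later empirical fits, not from this report.

```python
import math

# Logarithmic greenhouse sketch. ALPHA is a later empirical fit and
# SENSITIVITY is chosen to reproduce the ~2 C-per-doubling figure;
# both are assumptions, not values from this report.
ALPHA = 5.35          # W m^-2 per e-folding of CO2 concentration
SENSITIVITY = 0.54    # K of surface warming per W m^-2 of forcing

def warming(c, c0=300.0):
    """Temperature change for CO2 rising from c0 to c (parts per million)."""
    return SENSITIVITY * ALPHA * math.log(c / c0)

doubling = warming(600.0)      # about 2 degrees
ten_percent = warming(330.0)   # the ~10 percent rise since 1900: a few tenths
```

On this crude accounting, the 10 to 15 percent rise in CO2 since 1900 yields a few tenths of a degree, the same order as the observed warming to 1940.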

The role of aerosols in the radiative 
balance cannot be calculated with 
anything like the certainty of that for 
carbon dioxide. Various estimates 
have been made of the effect of 
aerosols, with conflicting results. The 
principal effects of aerosols are to in- 
crease the scattering of sunlight in the 
atmosphere and also to absorb sun- 
light, the two effects being about 
equal. Thus, Robinson, in England, 
reports an average decrease of 25 per- 
cent in the amount of sunlight reach- 
ing the surface due to aerosols, and 
presumably at least half of this 
amount went into heating the atmos- 
phere. In clear air, such as that found 
in the polar regions, the effect of 
aerosols is much less, but in the 
tropical zone the turbidity of the at- 
mosphere, probably due primarily to 
natural haze from vegetation, is high 
all the time. 

Man-Made Aerosols — Aerosols 
should be taken into account in any 
calculation of the radiative balance of 
the earth-atmosphere system, but the 
fact is that we do not yet know how 
to do this with certainty. Further- 
more, there is the practical question 
of how man-made aerosols compete 
with natural aerosols. 

The haze observed in many parts of 
the world far from industrial sources 
originates chiefly in the organic mate- 
rial produced by vegetation, with 
large contributions from sea salt from 
the ocean and dust blown from dry 
ground. At times, volcanic activity in 
the tropics produces a worldwide in- 
crease of the aerosol content of the 
high atmosphere. It is estimated by 
Budyko, for example, that the solar 
radiation reaching the ground after 
the 1963 eruption of Mt. Agung, in 
Bali, was reduced in the Soviet Union 
by about 5 percent, a significant at- 
tenuation whose total effect on the 
global radiation balance is not clear. 

In contrast to these natural aero- 
sols, man has overwhelmed nature in 

certain parts of the world where in- 
dustrial smog and smoke have an evi- 
dent effect on the clarity of the atmos- 
phere. Observations in a few cities, 
such as Washington, D.C., and Uccle, 
Belgium, have documented the in- 
crease in turbidity and the decrease in 
solar radiation reaching the surface 
over the past few decades, even 
though progress has been made in the 
United States and Europe in reducing 
the production of smoke from coal- 
burning heat sources. 

An additional complication, a pos- 
sible effect of man-made contaminants 
in the atmosphere, is the observed 
reduction of the albedo of clouds due 
to contaminants absorbed in cloud 
droplets. This effect must also be 
taken into account in a complete cal- 
culation of the radiation budget and 
man's effects on it. 

Needed Scientific Activity 

In view of the uncertainties in the 
many factors involved in the radiation 
balance of the earth, and the possibil- 
ity that man is significantly affecting 
the radiation balance by his introduc- 
tion of aerosols and his increase in the 
CO2 content, it is necessary to inten- 
sify our studies of the effects of these 
factors on the climate. 

Models — The key to such studies 
is the development of adequate clima- 
tological models on which experi- 
ments can be run. One would, for 
example, study the change in the 
average temperature in various re- 
gions of the globe for certain changes 
in the optical characteristics of the 
atmosphere resulting from aerosols 
and carbon dioxide. There are many 
feedbacks in this system, and the 
model should take as many as pos- 
sible into account. A major feedback, 
already referred to, is that due to 
changing ice and snow cover in the 
polar regions; another is due to 
change of cloud cover; the two prob- 
ably react in the opposite direction to 
a change in average temperature. 

Since the oceans are important in 
the long-term heat balance of the sys- 
tem, a climatic model must certainly 
include oceanic circulations, even 
though they are largely secondary to 
atmospheric circulations in the sense 
that the atmosphere drives the sur- 
face currents. Progress in modeling 
oceanic circulation has been made in 
a number of places, notably the Geo- 
physical Fluid Dynamics Laboratory 
of NOAA, NCAR, Florida State Uni- 
versity, and The RAND Corporation. 
The challenge, eventually, will be to 
combine the atmospheric and oceanic 
circulations in one model. 

Monitoring — It is not sufficient to 
develop a theory without being aware 
of changes actually taking place in the 
real atmosphere. For this reason it 
will be necessary to continue to moni- 
tor the climate, as is being done in a 
number of stations throughout the 
world. In addition to the usual pa- 
rameters of temperature, wind, and 
precipitation, the composition of the 
atmosphere and its turbidity need to 
be monitored better than they are 
now. This is not a simple task, since 
quantitative measurements of trace 
gases require fairly elaborate tech- 
niques, while measurements that de- 
scribe the aerosol content of the at- 
mosphere should provide information 
on the optical properties of these 
aerosols as well as their concentration. 
It is necessary to know how these 
aerosols affect incoming solar radia- 
tion and outgoing infrared radiation. 
This has not been done adequately, 
except on a few occasions using spe- 
cial equipment. 

Satellites have been useful in many 
ways in obtaining new information 
about the global atmosphere, and they 
can contribute significantly to the 
monitoring task. Except for cloud 
cover, however, observations to date 
have not been sufficiently quantita- 
tive. Cloud cover can and should be 
monitored by satellites. Satellites can 
also monitor snow and ice cover, al- 
though there is a problem during the 
polar night when pictures cannot be 
taken in the usual manner. This situa- 



tion is improving rapidly, since the 
High Resolution Infrared Radiom- 
eters, of the type used on the Nimbus- 
4 and ITOS-1 satellites, can obtain 
pictures by day or night and even 
provide an indication of the heights of 
cloud tops. Nimbus-F, scheduled for 
launching by the National Aeronau- 
tics and Space Administration 
(NASA) in 1974, may carry an ab- 
solutely calibrated radiation experi- 
ment that could mark the beginning 
of direct quantitative measures of the 
total heat budget of the earth. Mea- 
surements of lower atmospheric com- 
position, or pollution, from satellites 
have been proposed, but at this time 
they seem to be further in the future. 
Ozone, a trace gas found mostly in 
the stratosphere and upper tropo- 
sphere, has been measured, but this 
component may be of minor concern 
in the present context. 

A Perspective on Man-Made Pollu- 
tion — The possible change in the 
radiative characteristics of the upper 
atmosphere due to rockets can prob- 
ably be dismissed, because even ex- 
treme assumptions about numbers of 
Saturn-class rockets being launched 
lead to negligible changes. The con- 
tribution of jets to water vapor and 
aerosols in the stratosphere may also 
be trivial. Recent studies by the Na- 
tional Academy of Sciences, by Ma- 
nabe and Wetherald, and by others 
strongly suggest that it is. Contrails 
are likely to have a climatic influence 
only when they trigger the formation 
of extensive bands of cirrus cloud 

which mature, with the passage of 
time, to a sufficient optical depth in 
the infrared to produce either signifi- 
cant blanketing or reduction of in- 
coming visible solar radiation. 

One cannot say for certain that, on 
the occasions when jet-airplane con- 
trails produce cirrus clouds, the cirrus 
clouds would not have formed natu- 
rally. But there are many occasions, 
some lasting for several days, when 
major portions of the United States 
are crisscrossed by jet-airplane con- 
trails that do not dissipate, but instead 
spread out until major fractions of the 
sky are covered by thin cirrus of suf- 
ficient intensity to be of radiative sig- 
nificance. What needs to be done 
is to conduct quantitative studies, in 
selected areas of the earth, of the 
radiative losses to space that occur 
with and without cirrus clouds. Then 
there needs to be a rather careful ex- 
amination of the degree to which 
these cirrus can be artificially trig- 
gered. The stability of the large-scale 
circulation is an extremely important 
matter. We know that large trough 
developments occur in the 300-milli- 
bar circulation, particularly in the late 
winter and spring seasons, in a way 
that is difficult if not impossible to 
predict. It is quite conceivable that 
cirrus cloud formations at high lati- 
tudes over warm sources, as over the 
Gulf of Alaska, may be important in 
this regard. 

The atmosphere-ocean system de- 
pends on the heat available to run it, 
and this is the result of a delicate bal- 

ance between heat received from the 
sun and re-radiated to space. There 
are ways to disturb this balance, and 
the ice ages of the past are proof that 
nature sometimes does, in fact, alter 
it. Man might do the same, and this 
possibility deserves the most careful 
study. There has been much hand- 
waving of late by "prophets of doom." 
While virtually none of these people 
is a scientist, atmospheric scientists 
have not been able to make convinc- 
ing rebuttals so far. 

The earth actually has a remarkably 
stable life-support system, and man 
is unlikely to be able to move it far 
from its equilibrium. To mention a 
few examples: Aerosols, of the sort 
that man or nature creates, only re- 
main in the atmosphere for about a 
week on the average. Thus, indus- 
trial pollution in the United States 
hardly has time to reach Europe be- 
fore it is washed from the air. Further- 
more, natural sources of contamina- 
tion from vegetation, volcanoes, the 
oceans, and the deserts still far out- 
weigh all of man's contributions, 
taken on a global scale. With respect 
to the balance built into our highly 
variable clouds, an increase in mean 
temperature would probably cause an 
increase in moisture and cloudiness, 
which in turn would reflect more solar 
radiation back to space. Such a nega- 
tive feedback, forcing the situation 
back to equilibrium, is only one of 
several mechanisms that we are be- 
ginning to identify in the complex 
atmosphere-ocean system. 

Climatic Change and the Effects of Civilization 

A worldwide climatic change has 
been taking place for the past decade 
or two. Its reality has been estab- 
lished by scientists of the United 
States, the Soviet Union, and elsewhere. 

The climatic amelioration that took 

place between the late 1800's and 
1940 has ended, and the mean tem- 
perature of the earth appears to have 
fallen since the middle of the present 
century. (See Figure III-8) Some 
dramatic environmental changes have 
followed — e.g., the return of mid- 
summer frosts in the upper Midwest, 

record cold autumns in Ohio, rising 
lake levels in East Africa, and mas- 
sive encroachment of sea-ice on the 
north shore of Iceland. With this 
change, the circulation patterns of 
the atmosphere also appear to have 
changed. Any such changes on an 
earth that is straining its capacity to 




Figure III-8

The observed temperature variation of the northern hemisphere has here been 
corrected (see solid line) for the time lag of the ocean-atmosphere-soil system and 
the system's response to factors that cause the variation of temperature. A half- 
response-time of ten years was used. The broken line is the smoothed curve of 
Figure III-4 repeated for comparison. 

feed the human population are sig- 
nificant. 
The theory of climate is so poorly 
developed that we cannot predict ac- 
curately whether the climatic trend 
will continue, or how the distribution 
of rainfall and frost will change if it 
does. Clearly this knowledge is a 
national and international need of 
high priority. Clearly, too, we must 
know whether any or all of the recent 
fluctuation in climate is man-made, 
and whether it can be man-controlled. 
Lacking an adequate theoretical basis 
for prediction, we can only look to 
the past to see what kinds of changes 
are possible, with what rapidity they 
may occur, and what the causal fac- 
tors might have been. 

Basic Balances 

Although numerical models of the 
atmospheric circulation are still too 
crude to simulate the climatic pattern 
within an error small enough to be 
less than the occasional ecologically 
significant variations, certain basic 

relations may be identified that can 
yield information on some of the fac- 
tors important to climatic change. 

Ultimately the sun drives the at- 
mosphere. The fact that we have 
water in gaseous, liquid, and solid 
states in the proportions we do is de- 
pendent on our distance from the sun 
and the fraction of the sunlight that 
the earth absorbs. In the long run, 
there must be the same amount of 
heat re-radiated to space from the 
atmosphere as is absorbed by the earth and its atmosphere. 

The receipt of solar radiation occurs 
on the cross-sectional area of the 
earth, but re-radiation takes place 
from four times as large an area — 
i.e., the entire area of the globe. Thus, 

SπR²(1 - a) = 4πR²It, 
where S is the solar constant, 

R is the radius of the earth, 
a is the albedo, or "reflec- 
tivity," of the earth, 
and It is the mean outward in- 
frared radiation flux from 
the earth to space. 

Satellite data show that It is fairly 
uniform over the earth, on the annual 
average, and that the albedo of the 
earth is such that the above equation 
is approximately balanced. The out- 
ward radiation measured from space 
is smaller on the average than that 
emitted by the earth's surface, so that 

S(1 - a) = 4(εσT₀⁴ - ΔI) 
where ε is the emissivity of the 
earth's surface, 

σ is the Stefan-Boltzmann con- 
stant, 

T₀ is the surface tempera- 
ture of the earth, 

ΔI is the difference between 
the heat radiated upward 
by the earth's surface and 
that leaving the top of the 
atmosphere for space — 
the "greenhouse effect," 

and εσT₀⁴ is understood as an 
average over the whole 
surface of the earth. 

The above equation is crude, but 
it provides an insight into the factors 
that might affect the general tempera- 
ture state of the earth. Clearly, fluc- 
tuations in solar intensity, the fraction 
of incoming solar radiation that is 
"reflected" or scattered away before 
reaching the ground, and the "green- 
house effect" are the major causes 
of variation in the mean temperature 
of the earth. A change of 1 or 2 
percent in any one of these variables 
is enough to produce a significant cli- 
matic change, yet none of them is 
known with this accuracy except per- 
haps the solar intensity. 
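
The balance equation permits a quick numerical check of this sensitivity claim. The solar-constant value below is a modern figure assumed for illustration:

```python
# Quick check of the balance  S(1 - a)/4 = sigma * T^4  and its sensitivity.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar constant, W m^-2 (modern value, assumed)
A = 0.30          # planetary albedo, as quoted in the text

def effective_T(s, a):
    """Effective radiating temperature from the global balance."""
    return (s * (1.0 - a) / (4.0 * SIGMA)) ** 0.25

T0 = effective_T(S, A)                     # about 255 K
dT_solar = effective_T(1.01 * S, A) - T0   # 1 percent stronger sun: ~+0.6 K
dT_albedo = effective_T(S, A + 0.01) - T0  # albedo up one point: ~-0.9 K
```

Even before any feedbacks, a 1 percent shift in solar input or a single point of albedo moves the effective temperature by the better part of a degree, which is comparable to the whole observed variation of the past century.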

Albedo — The temperature of the 
earth is most sensitively dependent on 
the albedo of the earth-atmosphere 
system — an increase of a few per- 
cent would cool the earth to ice-age 
temperatures. This variable — reflec- 
tivity — can be measured by mete- 
orological satellites, but not yet with 
sufficient accuracy. The albedo is also 
a variable that can be changed by 
human activity, primarily by changes 
in the transparency of the atmosphere 



resulting from particulate pollution. 
According to Angstrom, a 7 percent 
increase in the turbidity of the atmos- 
phere will produce a 1 percent 
change in albedo and a 1° C change 
in world mean temperature. 

The "Greenhouse Effect" — The 
other variable that can be changed by 
human activity is the "greenhouse 
effect." This depends on such things 
as the water-vapor content of the air, 
dustiness, cloudiness, and, especially, 
the carbon dioxide content. The car- 
bon dioxide content of the atmosphere 
has risen 11 percent or so in the past 
century, and it is widely believed that 
the rise is due to human activity in the 
burning of fossil fuels and greater 
exposure of soil humus and the like to 
oxidation. (See Figure III-9) 

In times past, changes in vegetation 
and land distribution and elevation 
affected the earth's albedo, as did 
short-time changes in cloud and snow 
cover. Volcanic activity, then as now, 
produced a variable input of particu- 
lates to the atmosphere, as did blow- 
ing dust from desert areas — which 
in turn affected the albedo. (See Figure 
III-10) In earlier times, the distribu- 
tion of land and sea, volcanic activity, 
the elevation of the land and nature 
of the biota probably affected the 
magnitude of the greenhouse effect. 
The sun's intensity may also have 
varied, though there is no evidence. 
In addition, there are complex feed- 
back mechanisms, such as additional 
water vapor in the air at higher tem- 
peratures, that increase the green- 
house effect which in turn increases 
the water-vapor content still more. 
While increased volcanic activity 
makes the atmosphere more turbid 
and thus tends to depress the tem- 
perature, it may also contribute to the 
greenhouse effect and thus tend in 
part to counteract the temperature 
effect. The complete equation for re- 
lating these effects is not known, but 
it appears that the effect of turbidity 
on the greenhouse effect is only about 
10 percent of its effect on the albedo. 

We do know, however, that there 
is something new under the sun — 
a population of humans sufficiently 

numerous to modify the whole albedo 
of the earth and the magnitude of the 
greenhouse effect through their sheer 


Figure III-9

In this graph, the mean observed temperature variation for the northern hemisphere 
has been adjusted for the time lag shown in Figure III-8 and for the warming effect 
of carbon dioxide (CO2). It can be seen that the increase of variation due to the 
"greenhouse effect" of CO2 is small compared with the variation of temperature 
corrected for system lag. (Compare values of Figures III-8 with III-9) Only about 
3 percent of the variance can be explained by the presence of CO2. 










Figure III-10

The mean observed temperature variation for the northern hemisphere has here 
been adjusted for the time lag of the system, the warming effect of CO2, and the 
effect of both stratospheric (volcanic) and tropospheric dust. The dust effect ex- 
plains 80% of the variance of the adjusted temperature, with 63% due to strato- 
spheric and 17% due to tropospheric dust. The resulting curve shows what tempera- 
tures would be observed under conditions of direct solar radiation with cloudless 
skies, although some residual errors remain. (Compare Figures III-4, 8, and 9) 



numbers and control of energy. Thus 
man can, and probably has, modified 
the climate of the earth. 

The Climates of the 
Past Century 

From late in the nineteenth century 
until the middle of the twentieth, the 
mean temperature of the earth rose. 
During this time the carbon dioxide 
content of the atmosphere rose enough 
to explain the global temperature rise 
— apparently the first global climatic 
modification due to man. At the same 
time, local production of particulate 
pollution was starting to increase 
rapidly due to mechanization and 
industrialization. By the middle of the 
twentieth century, these trends — 
amplified by a general population ex- 
plosion and a renewal of volcanic 
activity — increased the worldwide 
particulate load of the atmosphere to 
the point where the effect of these 
particulates on the global albedo more 
than compensated for the carbon di- 
oxide increase and world temperatures 
began to fall. 

The total magnitude of these 
changes in world or hemispheric mean 
temperature is not impressive — a 
fraction of a degree. However, the 
difference between glacial and non- 
glacial climates is only a few degrees 
on the worldwide average. 

Actually, it is not the mean tem- 
perature of the earth that is impor- 
tant, but rather the circulation pattern 
of the atmosphere. This is strongly 
dependent on the temperature differ- 
ence from the tropics to the poles. 
The same man-modifiable factors that 
affect the mean temperature of the 
globe — albedo and carbon dioxide — 
even if applied uniformly over the 
globe — will have the effect of chang- 
ing the meridional temperature gradi- 
ent and thus the circulation pattern 
and resultant weather pattern. It is 
this change of pattern that is of prime 
concern. Dzeerdzeerski in the Soviet 
Union, Kutzbach in the United States, 

and Lamb in England have all pro- 
duced different kinds of evidence that 
the circulation patterns have changed 
in the past two decades. In turn, the 
local climates show change — some 
regions wetter, some drier, some 
colder, some warmer — though some 
remain unchanged. 

The most striking changes have 
been where the effects of the change 
are cumulative, such as the slightly 
changed balance between evaporation 
and precipitation in East Africa which 
has caused the level of great lakes 
such as Victoria to rise markedly. 
Another case is the balance between 
ice wastage and production that has 
changed enough in the last decade to 
bring drift-ice to the Icelandic shores 
to an extent unknown for a century. 
It would be most useful to know what 
the cumulative ecological effect of 
these local or regional changes might 
be. Since biological selection in re- 
sponse to environmental changes usu- 
ally requires a number of generations 
to show the total effect of the change, 
it is probably too soon to know the 
total ecological impact of the present 
change. Here we can only look to the 
past to see what is possible. 

The Lesson of History 

The advent of radiocarbon dating 
has given a new dimension to the 
study of the variety of paleobotany 
known as palynology. It is now pos- 
sible to put an absolute time-scale on 
the record of environmental change 
contained in the pollen assemblages 
recovered from bogs and lake sedi- 
ments. In the context of the present 
discussion, the most startling result 
is the rapidity with which major envi- 
ronmental changes have taken place. 

If we examine the most carefully 
studied and best-dated pollen profiles, 
we find that the pollen frequencies 
often show a quasi-exponential change 
from, for example, an assemblage 
that might indicate boreal forest to an 
assemblage typical of mixed hard- 

woods. Calling the time required for 
half the change to occur the half-life 
of the transition, it appears that such 
major changes in vegetation may have 
half-lives of a couple of centuries or 
less. (Greater specificity must await 
analyses with much finer time-resolu- 
tion than has been generally used.) 
Since the plants integrate the climate, 
the half-life of the climatic change 
must be shorter still! 
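
The half-life defined here follows directly from the quasi-exponential form. A small sketch, with a made-up decay rate for illustration:

```python
import math

# If a pollen frequency relaxes exponentially toward its new level,
#     f(t) = f_new + (f_old - f_new) * exp(-k * t),
# the half-life of the transition is ln(2) / k.
def half_life(k):
    return math.log(2.0) / k

# Illustrative rate: one third of the remaining change completed per
# century gives k = -ln(2/3) per century.
k = -math.log(2.0 / 3.0)
hl = half_life(k)   # about 1.7 centuries: "a couple of centuries or less"
```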

With the agricultural land use of 
the world still reflecting the climatic 
pattern almost as closely as the native 
vegetation did, a major shift in cli- 
matic pattern within a century could 
be disastrous. Unlike in the past, migra- 
tion into open lands is not possible: 
there are none, and forcible acquisi- 
tion of agricultural land with a favor- 
able climate is not acceptable. Only 
in a few nations would a combination 
of regional variety and advanced tech- 
nology allow an accommodation to a 
major climatic change. 

What We Need To Know 

Faced with the possibility that we 
are well into a climatic change of ap- 
preciable magnitude, of man's mak- 
ing, a number of questions appear 
to which answers are urgently needed. 

Since in the past there have been 
rapid changes in climate due to natural 
causes, such as major changes in vol- 
canic activity, what is the probability 
of increased volcanism in the next few 
decades adding to the pollution of the 
atmosphere made by man and thus 
speeding up the present climatic change? 

How far will the present climatic 
change go? It appears that the change 
from a glacial climate to a nonglacial 
climate occurred with great rapidity. 
Would the opposite change occur as 
fast? What chance is there, on a rela- 
tively short time-scale, to control the 
sources of turbidity? 



If we have reverted to the climate 
characteristic of the early 1800's, what 
displacements in the world agricul- 
tural pattern will occur in the next few decades? 

The answers to these and a host of 
related questions will require a much 
more sophisticated knowledge of cli- 
mate and the man-environment sys- 
tem than we now possess. Time is 

short and the challenge to science is great. 

Environmental Change in Arid America 

One of the great controversies in 
ice-age paleoecology is how to explain 
the virtually simultaneous coast-to- 
coast extinction of large mammals in 
North America around 11,000 years 
ago. We know, for example, that ele- 
phants once existed even in the pres- 
ently arid lands of the West. Paleon- 
tologists have commonly recovered 
the bones of Mammuthus columbi in 
arid America, along with bones of 
other extinct large mammals, includ- 
ing horses, camels of two extinct 
genera, extinct bison, and ground sloths. 

Did the climate change suddenly? 
Fossil elephants and the like inevitably 
provoke visions of a wetter climate 
and a more productive ecosystem 
than today's arid land will support. 
But the fossil-pollen record has indi- 
cated otherwise. 

Fossil Pollen and 
Other Forms of Evidence 

The technique of fossil-pollen anal- 
ysis has proved of unique value in 
determining what the vegetation and, 
by implication, the primary produc- 
tivity of arid America must have been 
during the period when this region, 
along with the rest of the continent, 
supported large numbers of native 
large mammals. 

Pollen is a very popular fossil be- 
cause it is produced in quantity by 
certain plants and, thanks to its acid- 
resistant outer wall or shell, is pre- 
served in many types of sediments. 
Unlike fossils of larger size, pollen is 
usually dispersed evenly throughout 
a deposit rather than aggregated in 
one or a few distinct beds. Under 
relatively uniform sedimentation, as 
determined by closely spaced radio- 
carbon dates, one can estimate the 
intensity of the local pollen rain 
through time, as Davis has done in a 
study of vegetation history at Rogers 
Lake, Connecticut. Different vegeta- 
tion zones shed different amounts of 
pollen — a tundra much less than a 
forest, for example. This is revealed 
by the fossil pollen extracted through 
hydrofluoric-acid treatment of lake sediments. 
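The influx estimate rests on simple arithmetic: pollen concentration in the sediment, multiplied by the sedimentation rate derived from the radiocarbon dates, gives the pollen rain per unit area per year. A minimal sketch, with hypothetical core values:

```python
def pollen_influx(grains_counted, sample_volume_cm3,
                  depth_top_cm, depth_bottom_cm,
                  age_top_yr, age_bottom_yr):
    """Pollen influx (grains per cm^2 per year) from a measured
    pollen concentration and a sedimentation rate bracketed by
    two radiocarbon dates."""
    concentration = grains_counted / sample_volume_cm3   # grains/cm^3
    sed_rate = ((depth_bottom_cm - depth_top_cm)
                / (age_bottom_yr - age_top_yr))          # cm/yr
    return concentration * sed_rate

# Hypothetical core: 50,000 grains per cm^3 of mud, and 100 cm
# of sediment deposited between dates of 0 and 1,000 years:
print(pollen_influx(50_000, 1.0, 0.0, 100.0, 0.0, 1000.0))  # 5000.0
```

A tundra interval would show up as a marked drop in this influx figure relative to a forested interval, even if relative percentages changed little.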

In many deposits, especially in arid 
lands, absolute values cannot be esti- 
mated. The relative amounts of the 
dominant pollen types in a deposit can 
be compared with the pollen content 
of sediments presently being deposited 
in areas of natural vegetation. Literal 
interpretation of the relative pollen 
percentage cannot be made — i.e., 10 
percent pine pollen does not mean 
that 10 percent of the trees in the 
stand were pines. But the pollen 
spectrum of all types identified in a 
fossil count can be matched, through 
computer programs or simple direct 
comparison, with the pollen rain of 
modern natural communities. This 
method works especially well in west- 
ern United States, where there are 
extensive areas of relatively undis- 
turbed vegetation. In this way, any 
major or increasing number of minor 
changes in vegetation through time 
can be detected. 
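The matching of a fossil spectrum against modern pollen rain can be sketched with a simple dissimilarity measure; the squared chord distance used here is one common choice for proportion data, adopted for illustration rather than named in the text:

```python
import math

def squared_chord_distance(fossil, modern):
    """Dissimilarity between two pollen spectra expressed as
    proportions summing to 1; smaller values mean closer match."""
    return sum((math.sqrt(f) - math.sqrt(m)) ** 2
               for f, m in zip(fossil, modern))

def best_match(fossil, modern_sites):
    """Name of the modern community whose pollen rain most
    closely resembles the fossil spectrum."""
    return min(modern_sites,
               key=lambda name: squared_chord_distance(
                   fossil, modern_sites[name]))

# Hypothetical spectra (pine, oak, grass proportions):
fossil = [0.50, 0.30, 0.20]
sites = {"pinyon-juniper": [0.55, 0.25, 0.20],
         "grassland":      [0.10, 0.10, 0.80]}
print(best_match(fossil, sites))  # pinyon-juniper
```

A "simple direct comparison" of the kind the text mentions amounts to inspecting these distances by eye rather than by machine.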

As opportunity allows, the fossil- 
pollen record can be compared with 
other forms of evidence. Macrofossil 
remains of plants, including seeds and 
leaves, are found in certain lake muds. 
They have been reported in remark- 
able abundance in ancient wood-rat 
middens of certain desert regions by 
Wells. The oldest rat's nests studied 
by Wells are over 30,000 years in age, 
essentially beyond the limit of what 
the radiocarbon method can determine. 

The Climatic Record of 
Western America 

The fossil record of radiocarbon- 
dated deposits covering the last 30,000 
years in western America indicates an 
initial cool, dry period becoming 
colder and wetter by 20,000 to 16,000 
years ago. At this time, there were 
ponderosa-pine parkland and pinyon- 
juniper woodland at elevations about 
3,300 feet below their present lower 
limits on western mountains. The fate 
of prairie, both short and tall grass- 
land, is unknown. The present prairie 
region was occupied by spruce in the 
north and pine in the south. This 
suggests that arid America, like other 
regions, was affected by the late 
Pleistocene cooling associated with ice 
advance over Canada. 

Around 12,000 years ago the cli- 
mate changed rather rapidly, becom- 
ing warmer and drier, until conditions 
were only slightly cooler and wetter 
than now. Modern vegetation zones 
have occupied their present positions, 
with minor fluctuations, continuously 
for the last 8,000 years. 

Thus, the record shows that the 
environment of western America in- 
habited by mammoth, camels, native 
horses, and bison at the time of their 
extinction 11,000 years ago was not 
vastly different from what we know 
at present. Why, then, did the ani- 
mals die? Fossil pollen and other 
evidence from the radiocarbon dating 
of extinct Pleistocene faunas seem to 
indicate that no environmental defects 
will explain this phenomenon. One 
must look elsewhere. And the only 
new variable in the American ecosys- 
tem of the late-glacial period is the 
arrival of skilled Stone Age hunters. 
These events of thousands of years 
ago have major implications for mod- 
ern-day range management. 

Implications for Modern Range 

In part, the concept of the West as 
a "desert" is based on the fact that 
grass production is indeed quite low. 
But the dominant woody plants found 
across the one million square miles of 
western America — the creosote bush, 
sagebrush, cactus, and mesquite — do 
yield large amounts of plant dry- 
matter annually. Primary productiv- 
ity data on these western shrub com- 
munities are less abundant than one 
might wish. Nevertheless, such data 
as do exist indicate that shrub com- 
munities in southern Arizona may 
yield 1,400 kilograms per hectare a 
year, considerably more than adjacent 
grassland under the same climate (12 
inches of precipitation annually). 

Observers have overlooked or writ- 
ten off this annual production, per- 
haps because it is often avoided by 
domestic livestock. Indeed, fifty years 
of range management in the West has 
been aimed at destroying the woody 
plants to make way for forage more 
palatable to cattle. The effort has 
been singularly futile and should be abandoned. 

The Future of Western Meat-Pro- 
duction — The dilemma faced by the 
range industry in arid America is that 
beef can be produced faster, more 
efficiently, and at less expense in the 
southeast or in feedlots. If this fact 
is accepted, one can make a case for 
keeping large areas of arid America 
as they are, at least until much more 
is known about primary production of 
the natural communities and until 
some value for Western scenery can 
be agreed upon. Some large, wealthy 
ranchers have already recognized this 
and have disposed of their cattle. 
More should be encouraged to do so. 
If a meat-producing industry is to be 
established in the marginal cattle 
lands in the West, it should be based 
on new domestic species, animals that 
are better adapted to arid environ- 
ments than cattle and that are adapted 
for efficient browsing rather than grazing. 

Potential New Domesticates — One 
obvious source for potential new 
domesticates is Africa, where arid 
ranges that barely sustain cattle are 
supporting thrifty herds of wilde- 
beest, kongoni, zebra, giraffe, and 
kudu. In size and general ecology, the 
African species bear at least general 
resemblance to the extinct Pleistocene 
fauna of the Americas. They did not 
invade the New World during the ice 
ages because they failed to range far 
enough north to be able to cross 
the Bering Bridge, the only natural 
method of intercontinental exchange 
open to large herbivores. Many natu- 
ral faunal exchanges of arctic-adapted 
herbivores did occur over the Bering 
Bridge in the Pleistocene. Some, but 
not all, of the invaders re-adapted to 
warmer climates of the lower latitudes. 

In summary: (a) Studies of fossil 
pollen and other evidence of the last 
30,000 years reveal no environmental 
defects that might explain the extinc- 
tion of many species of native New 
World large mammals 11,000 years 
ago. (b) The only known environ- 
mental upset at the time of large ani- 
mal extinction was the arrival of Early 
Man. (c) The cattle industry of west- 
ern America is marginal, being main- 
tained for reasons of its mystique, not 
for its economics. (d) If a more pro- 
ductive use of the western range is 
desirable, experiments with other 
species of large mammals should be 
begun now, as indeed they have been 
on certain ranches in Texas, New 
Mexico, Mexico, and Brazil. 






Oceanic Circulation and the Role of the Atmosphere 

The ocean circulation is one of the 
primary factors in the heat budget of 
the world. The circulation is impor- 
tant not only internally to the ocean 
but also to the overlying atmosphere 
and, indeed, to the climate of the 
entire earth. Together the sea and 
the air make a huge thermal engine, 
and it is not possible to understand 
either without having some compre- 
hension of the other. Any studies of 
ocean circulation must inevitably in- 
volve this coupling with the atmosphere. 

The Present State of Understanding 

Studies of ocean circulation have 
progressed a long way in the past 
fifty years. Measurements of the 
characteristics of the ocean at great 
depths have produced at least a gen- 
eral sense of the major deep circula- 
tions. And extensive theoretical de- 
velopments over the same period have 
given us some glimmering as to why 
the circulations are what they appear 
to be. 

Ocean Variability — Both the ob- 
servational and theoretical studies 
have dealt mostly with a steady-state 
ocean or the long-term mean of an 
ocean. (See Figure IV-1) During the 
past few years, however, some data 
have been accumulated that allow us 
to speculate a bit about the variability 
of the ocean. Like mean circulation, 
variability is closely coupled to the 
atmosphere, and variations in ocean 
circulation may lead to, or stem from, 
variations in atmospheric phenomena. 
For example, one of the critical parts 








The figure shows sea-surface temperatures represented as deviations from global 
average values of the sea-surface temperature. The global average value for each 
5° latitude band is marked at the right-hand edge of the world map. Note the extent 
of the cold equatorial water in the Pacific (from the coast of South America westward 
halfway across the Pacific) and the warm water west and north of the United States. 



of the heat engine is the Norwegian 
Sea, an area where warm saline sur- 
face water from the Gulf Stream is 
cooled by contact with the atmos- 
phere, made dense, and returned to 
the open Atlantic as dense deep water 
in such quantity as to create a recog- 
nizable subsurface layer extending 
throughout the Atlantic, Antarctic, 
Indian, and Pacific oceans. In this 
case, the power to drive this thermo- 
haline engine comes from heat ex- 
change with the atmosphere. 

Warming of the surface waters in 
low latitudes and cooling in high lati- 
tudes creates easily recognizable ef- 
fects on the circulation of the ocean. 
The effect of this exchange on the 
atmosphere is equally important, not 
just locally — in that the coast of 
Norway remains ice-free — but also 
in the larger sense of general effects 
on the world atmospheric climate. 
The budget of this heat exchange and 
the details of its various expenditures 
must be learned if the earth's climate 
is to be understood. Seasonal and 
nonseasonal variations of the heat 
exchange, and their causes and ef- 
fects, must be studied. 

The Gulf Stream is both a cause 
and an effect of this exchange. It 
would exist in any case as a conse- 
quence of the wind-driven circulation 
in the trade-wind and westerlies areas, 
as do, in a weaker form, its South 
Atlantic, North Pacific, and South 
Pacific counterparts. (The heat and 
water sink of the far North Atlantic 
requires a vaster flow in the Gulf 
Stream than in the other western 
boundary currents.) But variations in 
the strength of the Gulf Stream may 
be either causes or consequences of 
variations in heat exchange in the 
Norwegian Sea. Although the effects 
of these variations may be severely 
damped by the time the waters enter 
the immense reservoir of the abyssal 
ocean, there is no certainty that their 
effects on the far reaches of the ocean 
are negligible. 

Some of the most interesting varia- 
tions yet observed in the ocean are 
in the North Pacific, where bodies of 
surface water thousands of miles in 
diameter remain warmer or colder 
than their seasonal means for periods 
ranging from three months to over a 
year. Such features seem to be char- 
acteristic of the North Pacific. Thus, 
a typical map of surface temperature 
is not one that is very near the norm 
everywhere, with many small highs 
and lows; instead, the whole North 
Pacific may consist of three to five 
large areas of deviant temperature. 
Such features have been noted only 
in the past fifteen years. They are 
beginning to receive the attention of 
meteorologists, as well as oceanogra- 
phers, since their consequences for 
the atmospheric climate cannot be 
discounted in attempting to under- 
stand and predict the world's weather. 
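The anomalies described here are deviations of observed surface temperatures from a reference mean (a latitude-band average in Figure IV-1, a seasonal mean in the North Pacific case). A minimal sketch of the computation, with hypothetical readings:

```python
def band_anomalies(sst_by_band):
    """Deviation of each sea-surface temperature reading from the
    average of its latitude band, the quantity mapped in Figure
    IV-1 (the readings below are hypothetical)."""
    anomalies = {}
    for band, temps in sst_by_band.items():
        band_mean = sum(temps) / len(temps)
        anomalies[band] = [round(t - band_mean, 2) for t in temps]
    return anomalies

# Three hypothetical readings in one 5-degree latitude band:
print(band_anomalies({"40-45N": [10.0, 12.0, 14.0]}))
# {'40-45N': [-2.0, 0.0, 2.0]}
```

A large warm or cold pool of the kind noted in the North Pacific would appear as a coherent region of same-signed anomalies persisting over many months of such maps.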

Prediction — Our present under- 
standing of the ocean is barely suffi- 
cient to account for the major cir- 
culations in a general way. Some 
preliminary attempts are now being 
made to predict specific features of 
ocean behavior, most of them being 
based on the persistence of deviations 
from the mean. That is, if an area 
shows an abnormally high surface 
temperature in one month, this anom- 
aly is apt to endure or persist for 
several months more and to diminish 
to the norm slowly. Strictly speaking, 
this is not prediction but merely the 
extrapolation of a present feature. 
More ambitious predictions are being 
contemplated, but they are still in 
very early stages. 
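A damped-persistence extrapolation of the kind described can be written in a few lines; the e-folding time used below is an assumed value, not an observed one:

```python
import math

def persistence_forecast(anomaly_now, months_ahead,
                         e_folding_months=6.0):
    """Damped-persistence forecast: a surface-temperature anomaly
    is assumed to decay exponentially toward the climatological
    norm (the 6-month e-folding time is an assumed value)."""
    return anomaly_now * math.exp(-months_ahead / e_folding_months)

# A +2.0 deg C anomaly observed now, extrapolated 6 months ahead:
print(round(persistence_forecast(2.0, 6.0), 2))  # 0.74
```

As the text notes, this is extrapolation rather than prediction proper: no physics enters beyond the assumption that the anomaly diminishes slowly toward the norm.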

Advances in Instrumentation 

Devices to measure ocean currents 
have improved greatly over the past 
ten years. They have been used to 
monitor changes in position of the 
Gulf Stream, to measure its deep 
flow, and to investigate some of the 
principal inferences about deep cir- 
culation in the Pacific and Atlantic 
oceans. Considerable improvement 
has also been achieved in instruments 
for measuring water characteristics. 

Moored buoys of various kinds 
have been developed for deep-water 
use within the past decade. They are 
used for monitoring certain character- 
istics of the ocean and atmosphere, 
including wind, air, and sea tempera- 
ture, subsurface temperature, waves, 
and, possibly, water velocity. These 
measurements can either be recorded 
and recovered by vessels or trans- 
mitted immediately by radio to ap- 
propriate shore bases. 

The future may see interrogation 
and retransmission of signals by satel- 
lite. The advantages of such monitor- 
ing stations would include relatively 
inexpensive operation (compared to 
weather ships) and the ability to 
gather data from regions that are out- 
side normal shipping lanes but may 
be extremely pertinent to ocean and 
weather studies. 

Deficiencies in the Data Base 

The data base for study of the 
ocean consists of measurements of 
water characteristics in various loca- 
tions and depths at different times 
and measurements of currents, waves, 
tides, and ocean depths. In some 
areas and some seasons, this data 
base is adequate for a long-term mean 
to be established; it is not continuous 
enough in time, however, to allow for 
adequate study of variations from the 
long-term mean. In other areas and 
seasons, the data base barely exists. 
High-latitude areas in winter have 
hardly been explored. Our knowl- 
edge of the deep arctic is extremely 
limited. Some few winter data are 
available from the antarctic region. 
The deeper parts of the ocean may be 
better represented in the present data 
base than the surface parts, since the 
deeper parts show less time-variation 
than the upper layers. 

Other parts of the data base in- 
volved in investigating ocean circu- 
lation include atmospheric-pressure 
observations and wind measurements, 
air temperature and the like. These, 
too, are limited both in time and 
space. Major shipping lanes are fairly 
well measured in many seasons. 
Among the more systematically meas- 
ured areas are the North Sea, the 
California Current system, and the 
Kuroshio Current. But data from the 
areas that ships avoid, either because 
of bad weather conditions or because 
they do not represent profitable ship 
routes, are generally sparse. Not only 
is the arctic poorly represented even 
with atmospheric information, but 
also the South Pacific and large parts 
of the South Atlantic. Very few areas 
in the world are represented by a 
data base sufficient to allow for sea- 
sonal and nonseasonal variations. 
Numerical models of the ocean are 
also still in an early stage of development. 

What is Needed 

A proper understanding of air-sea 
interchange and of deep flow is 
among the most urgent tasks of oce- 
anic circulation research. We need to 
determine which data are critical, ob- 
tain them, and use them in mathe- 
matical modeling of the ocean. Topics 
of practical importance to man, re- 
quiring urgent study, include fisheries 
production in the world ocean; this is 
related to ocean circulation, since the 
latter controls the availability of plant nutrients. 

Better understanding of the Arctic 
Ocean is crucial to proper evaluation 
of its possibilities as a commercial 
route for surface vessels or subma- 
rines. Better knowledge of the deep 
circulation and the rates of exchange 
of ocean water — both from the sur- 
face to the bottom and from the 
deeper parts of one ocean to the 
deeper parts of another — is particu- 
larly important in the light of new 
concerns over contamination and pol- 
lution. While the ocean can act as a 
reservoir to absorb, contain, and re- 
duce much of the effluent now being 
produced, it is not of infinite capacity 
nor can it contain materials indefi- 
nitely without bringing them back 
onto the surface. 

Time-Scale — It is not possible to 
lay out a time-scale for many of the 
things that must be investigated. For 
the problem of describing the mean 
ocean, another ten or fifteen years 
might be sufficient. In that period of 
time, it would be feasible to collect 
the additional data needed without 
substantially expanding the facilities. 
In order to accomplish this, however, 
the various institutions capable of 
carrying out the requisite measure- 
ments would have to devote a greater 
part of their time to this subject — 
and this may not be desirable. 

Developing a data base to study 
the time-variable ocean is a different 
sort of problem. Since our under- 
standing of the nature of time-varia- 
tions is still in a primitive stage, we 
must first learn how to observe the 
phenomena and then begin a system- 
atic series of observations in the ap- 
propriate places. Progress has been 
made in learning how to do this from 
buoy deployments in the Pacific and 

Atlantic oceans. These are prelimi- 
nary, however, and must be greatly 
augmented before we can really un- 
derstand even the scale, much less 
the nature, of the anomalies being 
observed. Understanding of this kind 
usually advances step by step from 
one plateau to another, but the steps 
are highly irregular both as to height 
and duration, and a feasible time- 
scale cannot be estimated. 

Necessary Activity — On the one 
hand, the scale of the problems dis- 
cussed here suggests large-scale, 
large-area, heavily instrumented re- 
search carried out by teams of in- 
vestigators. On the other, the history 
of ocean circulation research has 
shown that some of the greatest con- 
tributions were made by individuals 
— e.g., Ekman transport, Stommel's 
westward intensification, Sverdrup 
transport. A balance is required be- 
tween large-scale programs compa- 
rable to the space program and indi- 
vidual small-scale projects. 

One of the first needs is to train 
people able to work on problems of 
both the ocean and the atmosphere. 
The two fields have been far too sepa- 
rated in most cases. People trained 
in mathematics and physics are avail- 
able, but the average student finds it 
difficult to acquire a working back- 
ground in both the oceanic and at- 
mospheric environment; indeed, many 
people trained in physics and mathe- 
matics have limited backgrounds in 
either environment, relying on theory 
without adequate knowledge of the 
structure of the two systems. 

On Predicting Ocean Circulation 

Nonspecialists tend to think of 
ocean circulation systems as being 
primarily a matter of geographical 
exploration. We are not going to dis- 
cover many new undercurrents, how- 
ever. Nor will simple-minded "moni- 
toring" of ocean currents teach us 
much. Twenty years of looking for 
— and not finding — relations be- 
tween changes in patterns of applied 

wind stress and the total transports 
of currents like the Gulf Stream 
where it passes through the Florida 
Straits warn us that the chain of 
cause and effect in the ocean is rather 
complicated and that the primary 
problem is to make more profound 
our understanding of the ocean as a 
hydrodynamical phenomenon. 

What We Know — and Don't Know 

It has been pointed out that effective 
growth in the understanding of ocean 
surface waves has occurred only in 
the last decade. And ocean 
surface waves are probably the most 
easily observable and dynamically 
linear of ocean phenomena. Internal 
waves and oceanic turbulence are not 
so easily observable, and treatments 
of these phenomena are a thin tissue 
of preliminary theory largely unsup- 
ported by observation. Studies limited 
to rather high-frequency phenomena 
actually represent the kind most 
nearly duplicable in the laboratory. 

There is a small body of theory 
concerning oceanic circulation, but it 
deals only with the climatological 
mean circulation. The role of medium- 
scale eddy processes in ocean circula- 
tion is completely unknown, although 
current measurements indicate that 
they can be very important — as, for 
example, they are in the general circu- 
lation of the atmosphere. A two- 
pronged development of mathematical 
modeling and fairly elaborate field in- 
vestigation is going to be necessary to 
develop much further our under- 
standing of the hydrodynamical in- 
teraction of these eddies and the 
mean circulation. (A working group 
of the Scientific Committee on Ocean 
Research of the International Coun- 
cil of Scientific Unions recommended 
a "Mid-Ocean Dynamics Experiment" 
(MODE). Considering the three- 
dimensional detail of velocity struc- 
ture and its development in time that 
such a measurement program will 
entail, it seems clear that a major 
input from the engineering commu- 
nity will be needed.) 

Technological Limitations 

Oceanography is not presently 
competent technologically to tackle 
the tasks of measurement that are 
necessary in trying to unravel the 
dynamical features of large-scale mo- 
tions. The difficulty is simply that one 
needs to map variables like velocity 
rather densely in large volumes (per- 
haps 2 miles deep and 300 miles on a 
horizontal side) for rather long pe- 
riods (perhaps a year) with sufficient 
accuracy that reliable statistics can be 
calculated for complicated functions 
like triple correlation products. Many 
different modes of motion are occur- 
ring simultaneously, and we need to 
be able to separate one mode from 
another in order to compute interac- 
tions. Therefore, a great variety of 
arrays of sensors need to be arranged 
in different configurations and on 
different scales for gathering the kind 
of data required from the ocean. 
Some test portions of the ocean will 
need to be heavily instrumented in a 
manner more sophisticated than pres- 
ent small-scale observational opera- 
tions can achieve. It is safe to say 
that solutions of problems of internal 
waves, the general circulation and 
eddy processes, and such important 
local processes as coastal upwelling 
are simply going to have to wait until 
major new instrumental arrays be- 
come available. 
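The statistics in question are straightforward to define, even if hard to measure. A triple correlation product, for instance, is the mean product of three velocity records' deviations from their own means:

```python
def triple_correlation(u, v, w):
    """Mean triple product <u'v'w'> of three velocity records,
    formed from deviations about each record's own mean; such
    higher-order statistics enter eddy-interaction calculations."""
    n = len(u)
    mu, mv, mw = sum(u) / n, sum(v) / n, sum(w) / n
    return sum((a - mu) * (b - mv) * (c - mw)
               for a, b, c in zip(u, v, w)) / n

# Hypothetical current-meter samples (meters per second):
u = [1.0, 2.0, 3.0, 2.0]
v = [0.5, 1.5, 2.5, 1.5]
w = [2.0, 1.0, 0.5, 1.0]
print(triple_correlation(u, v, w))  # 0.0625
```

The measurement difficulty the text describes is that such a statistic is reliable only when computed from long records at many points at once, densely enough to separate the modes of motion.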

There is a limit beyond which in- 
ferior technique cannot go. It needs 
to be made very clear what a helpless 
feeling it is to be on a slow-moving 
ship, with a few traditional measuring 
techniques like water bottles and 
pingers on hand, trying to keep track 
of a variable phenomenon like an 
eddy that won't hold its shape. A 
faint idea of the elusiveness of the 
phenomenon can be conveyed to any- 
one who has tried to pick up mercury 
with his fingers or who has watched 
a teacher trying to keep track of her 
pupils on an outing to a public park. 
But the ocean environment is so much 
larger, so much harder to see, that we 
don't bring many of "our children" 
home. Measurement in large-scale 
ocean physics illustrates this limit 
very well. Further theoretical devel- 
opment is simply going to have to 
wait upon adequate measurement 
technique. The theoretical difficulties 
are not serious; mathematical model- 
ing can be worked by machine once 
sufficient insight has been gained as 
to what is actually going on in the ocean. 

The Need for Mathematical Models 

Some advances in climate control, 
pollution evaluation, and numerical 
weather forecasting might be achieved 
simply by extending present land- 
based meteorological networks into 
the ocean by means of buoys. Per- 
haps a superficial knowledge of tem- 
perature on a coarse grid in the upper 
100 meters of the ocean will be useful 
to meteorologists. But this will not 
provide the basis for a quantitative, 
rational, ocean-prediction system. 

In order to be able to predict the 
mechanism of the ocean it is neces- 
sary to have numerical-mathematical 
models that have been verified by 
comparison with actually observed 
case histories of oceanic motion. Be- 
cause there are several modes of such 
motion, these experiments or com- 
parisons have to be made on several 
different scales. But to date they have 
not been made. They are beyond our 
technical means. 

Actually, it is too early to try to 
design an oceanic monitoring system; 
some experimental measuring systems 
are needed first — aimed squarely at 
providing input for mathematical 
numerical modeling of the basic hy- 
drodynamical processes at work. Suc- 
cessfully tested models could evolve 
into successful prediction schemes. 
If sufficient resources were mus- 
tered to start a good crew of instru- 
ment engineers on a sample program 
of measurement, sufficient progress 
might be made in carrying out one 
sample comparison of theory and 
observation to catalyze progress on 
the other necessary experiments. One 
has the feeling that the science is 
locked in a dead-center position, and 
that a mighty shove is going to be 
needed to get it rolling. 



Hydrodynamic Modeling of Ocean Systems 

Waves and currents in the ocean 
can be organized into many different 
categories depending on horizontal 
dimension and the time-scale of vari- 
ability. Some of these categories are 
strongly interconnected, others al- 
most independent. In Figure IV-2 an 
attempt is made at classification, along 
with an indication of the principal 
ways in which each phenomenon has 
an impact on human activities. (The 
emphasis in this outline is on ocean- 
circulation phenomena; surface waves, 
tides, and storm tides are treated only 
briefly, although they are admittedly 
important subjects from the stand- 
point of practical disaster-warning 
systems.) 
Present Status 

Wind Waves and Tidal Waves — 
The numerical models presently used 
to predict surface waves are essentially 
refinements of earlier operational 

models developed by the U.S. Navy; 
they have proved valuable to ship- 
ping. New computer models, how- 
ever, allow a much more detailed in- 
corporation of the latest experimental 
and theoretical advances in the study 
of wave generation. Furthermore, or- 
biting satellites may soon be able to 
provide a good synoptic picture of the 
surface sea state all over the globe. 
Given an accurate weather forecast, 
computer models would then be able 
to predict future sea states. Indeed, 
it may turn out that the ultimate limi- 
tation to wave forecasting will involve 
the accuracy of the weather forecast 
rather than the wave-prediction 
model itself. 

Operational models for predicting 
tidal waves (tsunamis) have been de- 
veloped for the Pacific, where the 
danger of earthquakes is greatest. As 
soon as the epicenter of an earth- 
quake is located by seismographs, the 
model can predict the time a tidal 

wave will arrive. Such warning sys- 
tems are being developed by the 
National Oceanic and Atmospheric 
Administration (NOAA) and the 
Japanese Meteorological Agency. 
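The travel-time computation at the heart of such warning systems can be illustrated in miniature: in the open ocean a tsunami propagates as a shallow-water wave at a speed set by the water depth, so distance and mean depth fix the arrival time. A toy estimate, not the operational model:

```python
import math

def tsunami_travel_time_hours(distance_km, mean_depth_m, g=9.81):
    """Toy tsunami travel-time estimate: in deep water a tsunami
    moves as a shallow-water wave with speed sqrt(g * depth)."""
    speed_m_s = math.sqrt(g * mean_depth_m)          # wave speed, m/s
    return (distance_km * 1000.0) / speed_m_s / 3600.0

# Epicenter 3,600 km away across ocean of mean depth 4,000 m:
print(round(tsunami_travel_time_hours(3600.0, 4000.0), 1))  # 5.0
```

The operational models refine this by integrating the wave front over the actual bathymetry along each path from the epicenter.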

Storm Surges and Tides — Most of 
the research in developing numerical 
models to predict storm tides has 
been carried out in Europe, in con- 
nection with flooding in the North 
Sea area. In the United States, storm 
surges caused by hurricanes ap- 
proaching the Gulf Coast have gener- 
ated the most interest. The results of 
these model studies appear promising. 
Graphs and charts based on the 
model calculations may be used by 
Weather Service forecasters in mak- 
ing flood warnings. The models will 
also be useful in the engineering de- 
sign of harbor flood-walls and levees. 
In time, computer models will prob- 
ably replace the expensive and cum- 
bersome laboratory models of harbors 
now used by coastal engineers. 







The chart (Figure IV-2) classifies waves and circulations as functions of time and distance. Its categories, with the principal impacts of each on human activities, are: Surface Waves (shore erosion, offshore drilling); Tidal Waves, or tsunamis (safety of shore areas); Ocean Turbulence and Mixing (pollution, air-sea exchange); Storm Surges (safety of shore areas, hurricane damage); Circulation of Inland Seas (Great Lakes pollution, polar pack-ice models); and Circulation in Ocean Basins (long-range weather forecasting, fisheries, climatic change). 



Ocean Circulation — Over the past 
decade, three-dimensional numerical 
models for calculating ocean circula- 
tion have been developed by the So- 
viet Hydrometeorological Service and 
NOAA. The methods used are sim- 
ilar to those of numerical weather 
forecasting. Given the flux of heat, 
water, and momentum at the upper 
surface, the model predicts the re- 
sponse of the currents at deeper 
levels. The currents at deeper levels 
in turn change the configuration of 
temperature and salinity in the model ocean. 
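The logic of such a model can be caricatured in a single box: surface heat flux warms the water column while the current advects in upstream water. The step below is an illustrative sketch of that feedback, not the NOAA scheme:

```python
def step_box_temperature(T, Q_surface, u, T_upstream, dx, h, dt,
                         rho=1025.0, cp=3990.0):
    """One explicit time step for the temperature of a single
    model box: surface heat flux Q_surface (W/m^2) warms a column
    of depth h (m), and current u (m/s) advects in upstream water
    over grid spacing dx (m).  A minimal sketch of the model's
    logic, with typical seawater density and heat capacity."""
    heating = Q_surface / (rho * cp * h)       # K/s from surface flux
    advection = u * (T_upstream - T) / dx      # K/s from the current
    return T + dt * (heating + advection)

# 100 W/m^2 into a 50-m-deep box for one day, no current:
T1 = step_box_temperature(15.0, 100.0, 0.0, 15.0, 1e5, 50.0, 86400.0)
print(round(T1 - 15.0, 3))  # 0.042  (daily warming in deg C)
```

In a full model this step is repeated for many boxes and many levels, with the computed currents in turn altering the temperature and salinity fields, exactly the coupling described above.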

Although active work in develop- 
ing these models is being conducted 
at several universities, the only pub- 
lished U.S. calculations are based 
on the "box" model developed at 
NOAA's Geophysical Fluid Dynamics 
Laboratory. This model allows the 
inclusion of up to 20 levels in the 
vertical direction and a detailed treat- 
ment of the bottom and shore con- 
figuration of actual ocean basins. 

Cox's calculation of the circulation 
of the Indian Ocean is perhaps the 
most detailed application yet at- 
tempted with the NOAA "box" 
model. Using climatic data, it was 
possible to specify the observed dis- 
tribution of wind, temperature, and 
salinity at the surface as a function of 
season. The model was then able to 
make an accurate prediction of the 
spectacular changes in currents and 
upwelling in response to the changing 
monsoons that were measured along 
the African coast during the Indian 
Ocean Expedition of the early 1960's. 

Application of the Model to Prac- 
tical Problems — The numerical mod- 
els designed for studying large-scale 
ocean circulation problems can be 
modified to study more local circula- 
tion in near-shore areas or inland 
seas such as the Great Lakes. Thus, 
numerical models may be useful for 
the many problems in oceanography 
in which steady currents play a role. 
A partial list includes: (a) long-range weather forecasting; (b) fisheries forecasting; (c) pollution on a global or local scale; and (d) transportation in the polar ice-pack.

Needed Advances 

The Data Base — Standard oceano- 
graphic and geochemical data provide 
a fairly adequate base for modeling 
the time-averaged, mean state of the 
ocean. The data base for modeling 
the time-variability of the ocean is 
extremely limited, however. Information on large-scale changes in ocean circulation, as well as the small-scale variability associated with mixing in the ocean, has not been gathered in any comprehensive way.

Future progress in ocean modeling 
will depend on more detailed field 
studies of ocean variability. Such 
studies will establish the data base 
for the formulation of mixing by 
small-scale motions which must be 
included in the circulation model. 
Information on large-scale variability 
will provide a means for verifying the 
predictions of the models. 

Technical Requirements — The 
most promising approach appears to 
be the different arrays of automated 
buoys that have been proposed as 
part of the International Decade of 
Ocean Exploration (IDOE) program. 
Coarse arrays covering entire ocean 
basins, as well as detailed arrays for 
limited areas, will be required. 

Another technical requirement for 
ocean modeling is common to a great 
many other scientific activities: the 
steady development of speed in elec- 
tronic computers and the steady de- 
crease in unit cost of calculations. 

Manpower Training — Numerical 
models of currents have now reached 
a point where they can be of great 
value in the planning of observational 
studies and the analysis of data collected at sea. The models can be used in diagnostic as well as predictive modes. This is particularly true of the buoy networks proposed as part of the IDOE. In order to do this, however, more oceanographers will need to be trained to use the numerical models and to carry out the computations. This action will have to be taken quickly if numerical models are to have much significance in IDOE.

Application of Ocean Modeling 
in Human Affairs 

As pointed out by Revelle and others, a large fraction of the added carbon dioxide (CO₂) generated by the burning of fossil fuels is taken up by the oceans. However, few details are known concerning the ocean's buffering effect and how long it will continue to be effective. The ability of the ocean to take up CO₂ depends very much on how rapidly surface waters are mixed with deeper water. More detailed studies of geochemical evidence and numerical modeling are essential to get an understanding of this process. A start in numerical modeling of tracer distributions in the ocean has been made by Veronis and Kuo at Yale University and Holland at the NOAA Geophysical Fluid Dynamics Laboratory.
Another urgent task is to make an assessment of the effect of CO₂ and particulate matter in the atmosphere on climate. Present climatic knowledge does not allow reliable quantitative predictions of the "greenhouse effect" due to CO₂ or the screening out of direct radiation by particulate matter. Published estimates have been based on highly simplified models that treat only the
radiational aspects of climate. But 
no climate calculation is complete 
without taking into account the cir- 
culation of both the atmosphere and 
the ocean. Some preliminary climatic 
calculations have been carried out 
with combined numerical models of 
the ocean and atmosphere. But 
greater effort is required to develop 



more refined ocean models if these 
climatic calculations are to be reliable 
enough to be the basis for public 
policy decisions on pollution control. 

Time-Scale of Significant Advances — Since published papers on three-dimensional ocean circulation models have only recently begun to appear, rapid development should continue for at least another five years along present lines. In that time, ocean models should have reached about the same level of development as the most advanced atmospheric numerical models today. Within five years, at least the feasibility of application of numerical modeling to small- and large-scale pollution studies, long-range weather forecasting, and hydrographic data analysis should be well established. Another five years will probably be required to work out standard procedures for using numerical ocean circulation models in these applications.

Effects of Antarctic Water on Oceanic Circulation 

Except for a relatively thin (slightly 
less than one kilometer) warm surface 
layer in the tropics and subtropics, 
the ocean is basically cold and fairly 
high in dissolved oxygen content. 
Ninety percent of the ocean is colder than 8° centigrade, with an oxygen content generally from 50 to 90 percent of the saturation level. This
warm surface layer, because of its 
high stability, acts as an impervious 
cap over the cold abyssal water, 
blocking renewal (by the usual tur- 
bulent transfer methods) of the oxy- 
gen that has been consumed by 
various biological processes. 

Why, therefore, is the bulk of the ocean so cold and highly oxygenated? In studying the relationship of temperature to salinity in the cold abyssal waters of the world ocean, one is struck by its similarity to that found in antarctic waters. This suggests that the oceanographic processes occurring in antarctic waters influence, in a direct way, the physical and chemical properties of much of the ocean's abyssal water. One may think of the antarctic region as a zone in which the abyssal waters can "breathe," renew their oxygen supply, and release to the atmosphere the heat received at more northern latitudes.

The Antarctic Water Masses

This figure shows the position, circulation, and interaction of the several water masses found in the antarctic region.

The basic circulation pattern along a north-south plane in antarctic waters is shown in Figure IV-3. The warm and low-oxygen-content circumpolar deep water (CDW) slowly flows southward and upward. Eventually, it reaches the near-surface layers at the wind-produced Antarctic Divergence. Here, the intense thermohaline alteration resulting from the sea-air interaction converts the CDW into "antarctic surface water" (AASW), which is cold (near freezing, -1.6° to -1.9° centigrade) and relatively fresh. Some of the CDW is converted by more intense thermohaline alterations due to ice formation into a fairly dense continental shelf water. At certain times, this shelf water drops to the sea floor where, on mixing with additional CDW, it forms the "antarctic bottom water" (AABW); neither the times nor the exact locations of the vertical motion are adequately known. The AABW has worldwide influence. It reaches far into the northern hemisphere in the western Atlantic and Pacific oceans.

Though we do not know how the shelf water is produced, three methods appear to be likely: (a) sea-ice formation; (b) freezing, melting, or a combination of these at the floating base of the extensive ice shelves of Antarctica; (c) rapid cooling and evaporation resulting from the outbreaks of cold, dry antarctic air masses. Recently, a fourth method was proposed which is based purely on molecular exchange of heat and salt between a warm salty lower layer and a cold fresh upper layer. Which of these methods is the dominant one is not known. Method (a) has generally been considered the key method; however, recent studies show that method (b) may be most important. It is probable that all the methods are active to a varying degree, depending on the location, and a variety of AABW types are formed.

The AASW flows slowly northward (see Figure IV-3); on meeting the less dense sub-antarctic water (near 55° S.), it sinks, contributing to the "antarctic intermediate water" (AAIW). The cold, relatively fresh AAIW flows northward at depths of nearly one kilometer. It reaches the equator in the Indian and Pacific oceans and up to 20° N. in the Atlantic Ocean.

The zone where the AAIW forms is called the "polar front zone," or Antarctic Convergence. The processes occurring within the zone are not understood; even the concept of a "convergence" process is questionable. The structure and position of the polar front zone vary with time. How and with what frequency they vary, and how they influence AAIW formation, are not known at all. The polar front zone should be subjected to much study in the coming years. It is of major importance to the overturning process of ocean waters and to the climatic characteristics of the southern hemisphere and perhaps the world. The only way to study this feature effectively is by multi-ship expeditions and/or time-series measurements from numerous anchored arrays of instruments.

Exchange of Water Masses — From salt studies, the general rate of meridional exchange has been determined. The CDW southward transport is 77 million cubic meters per second, of which only 15 million cubic meters per second have been derived from the sub-arctic regions (mainly from the North Atlantic). The rest is the return flow from the two northward antarctic components (AAIW and AABW). The CDW also brings heat into the antarctic region. It is calculated that 14 to 19 kilogram-calories per cm² per year are released into the atmosphere by the ocean. This has a great effect in warming the antarctic air masses and, hence, in modifying the influence of Antarctica on world climate.
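The transport and heat figures above can be checked with simple arithmetic. The sverdrup bookkeeping follows directly from the numbers in the text; the conversion of kilogram-calories per cm² per year to watts per square meter is an added illustration using standard constants, not a figure from the report:

```python
# Back-of-envelope check of the water and heat budgets quoted in the text.
# The inputs (77 and 15 million m^3/s; 14-19 kcal/cm^2/yr) come from the
# report; the unit conversions use standard constants.

SV = 1e6  # 1 sverdrup = 10^6 cubic meters per second

cdw_southward = 77 * SV    # circumpolar deep water flowing south
from_subarctic = 15 * SV   # portion supplied from the sub-arctic regions

# The remainder must return northward as AAIW and AABW combined.
northward_return = cdw_southward - from_subarctic
print(northward_return / SV)  # -> 62.0 (million m^3/s)

# Convert the quoted heat release of 14-19 kcal/cm^2/yr to watts per m^2.
J_PER_KCAL = 4184.0
CM2_PER_M2 = 1e4
SEC_PER_YEAR = 365.25 * 24 * 3600

def kcal_cm2_yr_to_w_m2(q):
    return q * J_PER_KCAL * CM2_PER_M2 / SEC_PER_YEAR

print(round(kcal_cm2_yr_to_w_m2(14), 1))  # -> 18.6
print(round(kcal_cm2_yr_to_w_m2(19), 1))  # -> 25.2
```

In modern units, then, the quoted heat release is roughly 19 to 25 watts per square meter of ocean surface.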

The exchange of CDW for AAIW 
and AABW has the important result 
of taking out the warm, low-oxygen- 
ated water and replacing it with cold, 
high-oxygen-content water. Were it 
not for this, the abyssal waters would 
warm considerably by geothermal 
heating and downward flux of heat 
across the thermocline. They would 
also become devoid of oxygen by 
organic decomposition. 

Need for More Information 

Though the gross features of the circulation pattern can be described, we do not know enough detail about the process of conversion of CDW into the antarctic water masses. In what regions does this conversion take place? Is it seasonal or does it vary with another frequency? By what methods is the CDW converted into antarctic water masses?

To accomplish these tasks, long time-series measurements of currents, temperature, and salinity are needed along the continental margins of Antarctica and within the polar front zone. Multi-ship expeditions and satellite observations would also be useful in studying time-variations of the water structure. Geochemical studies of the isotopic makeup of the ice and sea water are necessary to yield information as to "residence" times within water masses and insight into the methods of bottom-water formation.

The antarctic waters are also of 
importance in that they connect each 
of the major oceans via a circum- 
polar conduit. The rate of the cir- 
cumpolar flow is not known, though 
recent studies indicate a volume 
transport of well over 200 million 
cubic meters per second, making it 
the largest current system in the 
world ocean. A program of direct 
current observations is needed to 
study the circumpolar current. Satel- 
lite surveillance of drogues will be 
a useful method to study the current.

In short, scientists need to know 
in more detail the methods, rates, 
and location of the formation of the 
antarctic water masses. They can 
accomplish this task by hydrographic 
and geochemical observations in cir- 
cumpolar waters using modern tech- 
niques. In addition, detailed time- 
series observations would be needed 
at particular points such as the Wed- 
dell Sea, Ross Sea, the Amery Ice 
Shelf, and other appropriate regions. 

Tropical Air-Sea Rhythms 

Tropical air-sea rhythms are best seen in the time-series of air and sea temperature at Canton Island, an equatorial island in the Pacific (2°48'S. 171°43'W.); this is the only locality where temperature observations have been maintained uninterruptedly over a long period, 1950 through 1967. However, there is now no way of continuing this important time-series because the Canton Island observatory, with its modern equipment for aerological data-gathering, was abandoned in September 1967 for economy reasons.

Air-sea data from near-equatorial 
islands has great importance because 
the sea temperature in such localities 
is subject to fluctuations of much 
greater amplitude than in the ad- 
jacent trade-wind belts of either 
hemisphere. As a consequence, the 
heat supplied from the ocean to the 
atmosphere near the equator becomes 
the most variable part of the total 
tropical ocean-to-atmosphere heat 
flux, which in turn is the major con- 
trol of the global atmospheric circu- 
lation. It is, therefore, logical to 
expect that ocean temperature fluc- 
tuations near the equator will influ- 
ence atmospheric climate outside of 
tropical latitudes. This action by re- 
mote control through the global at- 
mospheric circulation is here referred 
to as "teleconnections." 

According to preliminary findings, 
the teleconnections from the Pacific 
equatorial air-sea rhythms are major 
factors — perhaps, in many cases, 
the dominant factor — in creating 
rhythms of climatic anomalies any- 
where on the globe. Hence, these 
teleconnections must be understood 
before climatic anomalies can be 
predicted successfully. 

General Characteristics 

The following facts stand out from the Canton Island record. (See Figure IV-4.)

1. Sea temperatures vary over a greater range than air temperatures.

2. In periods of cold ocean the air is warmer than the sea, whereas in periods of warm ocean the air is colder than the sea.

3. Heavy monthly rainfall occurs only during periods of warm ocean.
It is known from atmospheric ther- 
modynamics that the heating of the 

atmosphere over a tropical ocean 
takes place mainly through the heat 
of condensation within precipitating 
cloud. Hence, the rainfall record is 

also a record of the major year-to- 
year variations of the atmospheric 
heat supply from the ocean. Those 
variations showed rhythms of about two years' periodicity, especially during the 1960's; at other times the rhythms were less regular.

The figure shows a time-series of monthly air and sea temperatures and monthly precipitation amount as measured at Canton Island from 1950 through 1967.

The mechanism of the equatorial air-sea rhythms is illustrated in Figure IV-5, which shows that a six-month, smoothed time-series of atmospheric pressure in Djakarta, Indonesia (6°S. 107°E.), exhibits the same long-period trends as the sea-surface temperatures measured at Canton Island and by ships crossing the equator at 165°W. When the barometric pressure in Djakarta is lower than normal, the equatorial easterlies heading for the Indonesian low become stronger than normal; this automatically intensifies the Pacific equatorial upwelling and cools the sea surface. The parallelism of the time-series of Djakarta pressure and Canton Island sea temperature is thereby assured.

If wind profiles are observed along 
the equator at two opposite phases 
of the air-sea rhythm, as exemplified 
by November 1964, with its cool 
ocean and aridity, and November 
1965, with its warm ocean and abun- 
dant rainfall at Canton Island, it is 

found that in November 1964 the 
equatorial easterlies swept uninter- 
ruptedly from South America past 
Canton Island toward a deeper-than- 
normal Indonesian low, whereas in 
November 1965 they stopped short 
of reaching Canton Island. The equa- 
torial upwelling — a by-product of 
the equatorial easterlies — extended 
almost to Indonesia in November 
1964, while being confined to a much 
smaller area east of Canton Island 
a year later. Concomitantly, the 
equatorial rainfall was confined to the 
neighborhood of Indonesia in No- 
vember 1964; the following year it 
expanded from the west to beyond 
Canton Island, while Indonesia suf- 
fered serious drought. 

The propulsion of the air-sea 
rhythms resides in the atmospheric 
thermally driven equatorial circula- 
tion over the Pacific, which has its 
heat source (by condensation) in the 
rising branch, and heat sink (by 
radiative deficit insufficiently com- 
pensated by scarce precipitation) in 
its descending branch near South 
America. The oceanic counterpart to 
this atmospheric circulation is, in 
part, the westward surface drift and 


The diagram shows the similarities in trend of the time-series of sea temperature
and pressure measured at and near the equator in the southern hemisphere. The 
dotted curve that follows that for Djakarta is based on data from Singapore. The 
rapid oscillations of the sea-temperature curve measured at the equator in 1958 and 
1959 result from more frequent ship crossings — and hence a greater density of short- 
period detail — rather than from any unusual natural activity. 

the subsurface return flow and, addi- 
tionally, the circulation consisting of 
an upwelling thrust at the equator 
and sinking motion to the north and 
south of the equator. These ocean 
circulations are wind-driven and in- 
trinsically energy-consuming, but they 
exert a powerful feedback upon the 
atmosphere by slowly varying the 
areal extent of warm water at the 
equator and thereby varying the thermal input for the global atmospheric circulation.

In November 1964, when cool up- 
welling water occupied almost the 
whole Pacific equatorial belt, the at- 
mosphere received less heat than in 
November 1965, when the upwelling 
had shrunk back into a smaller east- 
ern area. Consequently, the tropical 
atmosphere swelled vertically from 
1964 to 1965. This swelling was 
most conspicuous over the Pacific 
at 160° W. longitude. Moreover, the
swelling of the tropical atmosphere 
had spread all around the global 
tropical belt between 1964 and 1965, 
a global adjustment that is inevitable, 
since pressure gradients along the 
equator must remain moderate. 

North and south of the swelling 
atmosphere in the tropical belt, the 
gradient of 200-millibar heights increased from November 1964 to November 1965, which indicated
increasing westerly winds in the 
globe-circling subtropical jet streams. 
This can best be documented in the 
longitude sector from the area of 
Pacific equatorial warming eastward 
across North America and the At- 
lantic to the Mediterranean. 

The corresponding change at sea level could be seen most dramatically over Europe, where the moving low-pressure centers abandoned their normal track by way of Iceland to Scandinavia and, instead, in November 1965 moved parallel to the strengthened subtropical jet stream and invaded central and southern Europe.
Other associated rearrangements involved the arctic high-pressure system, which in November 1965 was displaced toward northern Europe and, consequently, on the Alaskan side of the pole left room for the moving low-pressure systems from the Pacific to penetrate farther north than normal.

So much for a description of the 
air-sea rhythms. Supporting evidence 
is available from a few other case 
histories. The motivation for con- 
tinued research on the equatorial air- 
sea rhythms is the desire to develop 
skill in forecasting climatic anomalies. 

Current Scientific Knowledge 

The data base is, unfortunately, 
scanty. As mentioned earlier, Canton 
Island is the only place where a 
continuous record of the near-equa- 
torial air-sea interaction was main- 
tained; even there, scientific knowl- 
edge of the air-sea rhythms, extending 
vertically to great heights in the 
atmosphere, must be based mainly 
on a study of the years from 1950 
through 1967. 

Oceanographic cruises in the equa- 
torial belt have been few and far 
between in space and time. The 
EASTROPAC Program, a series of 
internationally coordinated cruises in 
the eastern tropical Pacific and trans- 
equatorial cruises in the mid-Pacific, 
sponsored by the U.S. National Ma- 
rine Fisheries Service (NMFS), Hono- 
lulu, has been the best oceanographic 
effort to date to explore air-sea in- 
teraction in the critical area where 
the air-sea rhythms originate. Eess 
sophisticated, widely scattered ob- 
servations are available from com- 
mercial ships. Those collected by the 
NMFS in Honolulu from commercial 
ships that ply the route from Hawaii 
to Samoa have provided a time-series 
of equatorial sea temperature at 
165°W., together with the corre- 
sponding sea-temperature series at 
Canton Island. The two records agree 
rather well as far as the long rhythms 
are concerned. 

Organized reporting of sea and 
air temperatures from commercial 
ships crossing the east and central 
part of the Pacific tropical zone is in 
good hands with the NMFS in La 
Jolla, California; the monthly maps 
issued by that institution are at 
present the best source of informa- 
tion on tropical air-sea rhythms. 

The Status of Instrumentation —
An important technical improvement 
in the ocean data reported from 
commercial ships will come soon. 
Selected ships will be equipped with 
Expendable Bathy-Thermographs 
(XBT) to enable them to monitor the 
varying heat storage in the ocean 
down to the thermocline. 
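As an illustration of what such monitoring yields, the heat storage of the upper ocean can be estimated from a single temperature profile. The profile values below are invented, and the density and specific-heat constants are standard seawater values; this is a sketch of the calculation, not a procedure taken from the report:

```python
# Illustrative upper-ocean heat content from an XBT-style temperature
# profile. Depths and temperatures are invented; RHO and CP are standard
# seawater values.

RHO = 1025.0  # seawater density, kg/m^3
CP = 3990.0   # specific heat of seawater, J/(kg K)

# (depth in m, temperature in deg C) pairs down to the thermocline
profile = [(0, 28.0), (25, 27.8), (50, 27.0), (75, 24.0), (100, 20.0)]

# Trapezoidal integration over depth gives heat content per unit area,
# measured relative to 0 deg C.
heat = 0.0
for (z1, t1), (z2, t2) in zip(profile, profile[1:]):
    heat += RHO * CP * 0.5 * (t1 + t2) * (z2 - z1)

print(f"{heat:.3e} J/m^2 relative to 0 deg C")
```

Differences between successive profiles at the same location would then track the varying heat storage that the text describes.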

Anchored buoys can provide the same information as XBT-equipped commercial ships and will have the advantage of delivering data for long time-series at fixed locations. The buoys that can be permanently financed should preferably be placed to fill the big gaps between frequented shipping lanes. Also, their locations should be selected where ocean temperatures are likely to vary significantly, for instance along the equator.

Infrared radiometers on satellites 
can be adjusted to record sea tem- 
perature in cloud-free areas, but the 
accuracy of such measurements can- 
not quite compare with careful ship- 
or buoy-based observations. The 
great contributions of the satellites 
to tropical studies are — presently 
and in the future — the TV-mapping 
of cloud distribution, the temperature 
measurements of the top surface of 
cloud, and, under favorable condi- 
tions, the movement of individual 
clouds and cloud clusters. 

Fixed installations on tropical is- 
lands will continue to be important 
for research on ocean-atmosphere in- 
teraction. Aerological soundings, in- 
cluding upper wind measurements, 
are best done from islands; moreover, 
fundamental measurements like the 
time variations of the topography of 

ocean level can only be done with 
a network of island-based tide 
gauges. The latter job does not call 
for very expensive equipment, and 
the tide gauges can be serviced as 
part-time work by trained islanders; 
the aerological work, on the other 
hand, calls for a technologically 
skilled staff on permanent duty. 

Replacements for Canton Island as 
an aerological observatory would be 
relatively expensive, but yet cheaper 
than was Canton, if islands with 
stable native population were selected 
for observatory sites. The two British 
islands of Tarawa (1°21'N. 172°56'E.)
and Christmas (1°59'N. 157°29'W.) 
would be ideal choices. 

Mathematical Modeling — A crude 
modeling of an asymptotically ap- 
proached "steady state" of an equa- 
torial ocean exposed to the stress of 
constant easterly winds has been pro- 
duced by Bryan, of the Geophysical 
Fluid Dynamics Laboratory, NOAA. 
A corresponding, quickly adjusting 
atmospheric model of the equatorial 
circulation, such as observed over the 
Pacific, was described in 1969 by 
Manabe, also of the Princeton NOAA laboratory.

Presumably, the ocean and atmospheric models can soon be joined for a simulation of the equatorial air-sea rhythms. Even without mathematical formulation, the rhythm can be crudely visualized to operate as follows.
The cooling phase of the rhythm 
begins when the equatorial easterlies 
of the eastern Pacific start increasing 
and thereby start intensifying the 
upwelling. This increases the tem- 
perature deficit of the eastern end 
of the oceanic equatorial belt com- 
pared to its western end. The asso- 
ciated feedback upon the atmosphere 
shows up in an increased east-west 
temperature contrast, which produces 
an increment of kinetic energy in 
the equatorial atmospheric circula- 
tion. This, in turn, feeds back into 



increasing upwelling and ocean-cool- 
ing over an increasing area. 

A corresponding chain reaction can 
he visualized for the phase of the 
rhythm characterized by decreasing 
easterly winds, decreasing upwelling, 
and increasing equatorial ocean 
warming. Hence, a slow vacillation 
between the two extreme phases of 
equatorial atmospheric circulation, 
rather than a stable steady-state 
equatorial circulation, becomes the 
most likely pattern. 
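The vacillation argument can be caricatured with a deliberately simplified two-variable feedback loop, in which the east-west temperature contrast drives the easterlies while the easterlies, through upwelling, erode the contrast. Every symbol, coefficient, and the linear form below are illustrative assumptions, not part of the Bryan or Manabe models:

```python
# Toy sketch of the vacillation described above: the east-west sea-
# temperature contrast T excites the easterly wind anomaly U, while the
# easterlies, via upwelling, cool the ocean and reduce T. All symbols and
# coefficients are illustrative assumptions.

def simulate(a=1.0, b=1.0, dt=0.01, steps=2000):
    T, U = 1.0, 0.0  # temperature-contrast anomaly; easterly wind anomaly
    history = []
    for _ in range(steps):
        dT = -b * U  # stronger easterlies -> more upwelling -> cooling
        dU = a * T   # larger warm contrast -> stronger easterlies
        T += dT * dt
        U += dU * dt
        history.append(T)
    return history

h = simulate()
# The anomaly changes sign repeatedly: a slow vacillation between two
# extreme phases, rather than a stable steady state.
print(min(h) < 0 < max(h))  # -> True
```

The point of the caricature is only that mutually coupled restoring feedbacks of this kind oscillate rather than settle, matching the qualitative conclusion in the text.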

Simulation experiments are pres- 
ently being planned on a global basis, 
encompassing both ocean and at- 
mosphere; they will bring more pre- 
cise reasoning into the explanation 
of the equatorial air-sea rhythms 
and, hopefully, into the interpretation 
of their teleconnections outside the 
tropics. Both the Princeton team, un- 
der Smagorinsky, and the team at 
the University of California, at Los 
Angeles, under Mintz and Arakawa, 
are progressing toward that goal. 

Requirements for Scientific 

Continued empirical study of the 
tropical air-sea rhythms, in past and 
in real-time records, should accom- 
pany and support modeling efforts of 
theoretical teams. The knowledge 
gained on tropical air-sea rhythms 
and their extratropical teleconnec- 
tions so far rests on the study of 
only a limited number of case his- 
tories. Much more can be learned by 
studying the whole sequence of years 
1950-67, during which Canton Island 
was available as an indicator of the 
air-sea rhythms. These years include 
the International Geophysical Year 
period, which happened to exhibit 
some extreme climatic anomalies and 
also had better-than-normal global 
data coverage. 

Such investigations are relatively 
cheap. The main expense goes into 
the plotting and analysis of world 
maps of monthly climatic anomalies 

at several levels up to the tropopause.
Such a system of climatic anomaly 
maps would be the empirical tool for 
tracking the mechanism of the tele- 
connections. Liaison with EASTRO- 
PAC and other post-1950 Pacific 
tropical oceanographic research would 
become a natural outgrowth of the 
"historical" study. 

The 1970's will be the era of the
International Decade of Ocean Ex- 
ploration (IDOE) as well as that 
of the Global Atmospheric Research 
Program (GARP). The study of trop- 
ical air-sea rhythms belongs within 
the scope of both of these worldwide 
research enterprises and, indeed, will 
serve to tie the two together. The 
ultimate goal of IDOE-plus-GARP 
should be to model the atmosphere 
and the world oceans into one com- 
prehensive system suitable for elec- 
tronic integration. That endeavor 
should produce meaningful progress 
toward climatic forecasting by the 
end of the 1970's. 



Modeling the Global Atmospheric Circulation 

An understanding of the structure 
and variability of the global atmos- 
pheric circulation requires a knowl- 
edge of: 

1. The quality and quantity of 
radiation coming from the sun. 

2. The atmospheric constituents — 
not only the massive ones, but 
also such thermodynamically 
active components as water va- 
por, carbon dioxide, ozone, and 
clouds as well as other partic- 
ulates. Furthermore, one must 
understand the process by 
which these constituents react 
with the circulations and their 
radiative properties — i.e., ab- 
sorption, transmission, scatter- 
ing, and reflection. 

3. The processes by which the 
atmosphere interacts with its 
lower boundary in the trans- 
mission of momentum, heat, 
and water substance over land 
as well as sea surfaces. The 
behavior of the atmosphere 
cannot be considered independ- 
ent of its lower boundary be- 
yond a few days. In turn, the 
lower boundary can react sig- 
nificantly. Even the surface 
layers of the oceans have im- 
portant reaction times of less 
than a week, while the deeper 
ocean comes into play over 
longer periods. Hence, the 
evolution of the atmospheric 
circulation over long periods 
requires consideration of a dynamical system whose lower boundary is below the earth's surface.

4. The interactions of the large- 
scale motions of the atmos- 
phere with the variety of 

smaller-scale motions normally 
present. If these smaller scales 
have energy sources of their 
own, as is the case in the at- 
mosphere, the nature of the 
interactions will be consider- 
ably complicated. 

In principle, mathematical models 
embodying precise statements of the 
component physical elements and 
their interactions provide the means 
for numerically simulating the nat- 
ural evolution of the large-scale at- 
mosphere and its constituents. Suc- 
cessful modeling would have potential 
applications in a number of areas: 
long-range forecasting; determination 
of the large-scale, long-term disper- 
sion of man-made pollutants; the 
interaction of these pollutants in in- 
advertently altering climate; the in- 
fluence of intentionally tampering 
with boundary conditions to arti- 
ficially modify the climate equilib- 
rium. No doubt there are a variety 
of other applications of a simulation 
capability to problems that may not 
yet be evident. 

Current Status 

Efforts to model the large-scale 
atmosphere and to simulate its be- 
havior numerically began more than 
twenty years ago. As additional re- 
search groups and institutions in the 
United States and elsewhere became 
involved, steady advances in model 
sophistication followed. These came 
from refinements in numerical meth- 
ods as well as from improved formu- 
lations of the component processes. 

Today's multi-level models account 
for a variety of interacting influences 
and processes: large-scale topographic 

variations; thermal differences be- 
tween continents and oceans; varia- 
tions in roughness characteristics; 
radiative transfer as a function of an 
arbitrary distribution of radiatively 
active constituents; large-scale phase 
changes of water substance in the 
precipitation process; interactions 
with small-scale, convectively un- 
stable motions; the thermal conse- 
quences of variable water storage in 
the soil; and the consequences of 
snow-covered surfaces on the heat 
balance. More recently, combined 
models have taken into account the 
mutual interaction of the atmosphere 
and ocean, including the formation 
and transport of sea-ice. 

Although many of these elements 
are rather crudely formulated as cogs 
in the total model, it has been pos- 
sible to simulate with increasing detail 
the characteristics of the observed 
climate — not only the global wind 
system and temperature distribution 
from the earth's surface to the mid- 
stratosphere, but also the precipita- 
tion regimes and their role in forming 
the deserts and major river basins 
of the world. Attention is beginning 
to be given to the simulation of 
climatic response to the annual radia- 
tion cycle. 

Detailed analyses of such simula- 
tions in terms of the flow and trans- 
formation of energy from the primary 
solar source to the ultimate viscous 
sink show encouragingly good agree- 
ment with corresponding analyses of 
observed atmospheric data. Such 
models have also been applied to 
observationally specified atmospheric 
states in tests of transient predict- 
ability. Even within the severe limita- 
tions of the models, the data, and the 
computational inadequacies, it has 
been possible to simulate and verify 
large-scale atmospheric evolutions of 
the order of a week. These advances 
give promise that, as known deficien- 
cies are systematically removed, the 
practical level of the large-scale pre- 
dictability of the atmosphere can 
converge on a theoretical determin- 
istic limitation of several weeks. 
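
The arithmetic behind such a deterministic limit can be made concrete with a purely illustrative sketch (in Python, not part of the original analyses): if small analysis errors double at a fixed rate, the forecast horizon grows only logarithmically as the initial state is refined. The doubling time and error levels below are assumed round numbers, not measured values.

```python
import math

def predictability_horizon(initial_error, saturation_error, doubling_days=2.5):
    """Days until an error that doubles every `doubling_days` grows from
    `initial_error` to the saturation level set by natural variability
    (both errors in the same units, e.g. kelvins)."""
    doublings = math.log2(saturation_error / initial_error)
    return doublings * doubling_days

# Halving the initial error buys only one extra doubling time:
print(predictability_horizon(1.0, 8.0))   # 7.5 days
print(predictability_horizon(0.5, 8.0))   # 10.0 days
```

This is why removing known deficiencies yields convergence toward a limit of a few weeks rather than open-ended improvement.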

Models have also been used in 
some, more limited applications. For 
example, an attempt was made to 
simulate the long-term, large-scale 
dispersion of inert tracing material, 
such as radioactive tungsten, which 
had been released at an instantaneous 
source in the lower equatorial tropos- 
phere. The results were surprisingly 
good. Only limited attempts have 
been made to apply extant models to 
test the sensitivity of climate to small 
external influences. The reason is 
that one normally seeks to detect 
departures from fairly delicately bal- 
anced states. It is often beyond the 
current level of capability to simulate 
an abnormal response that is com- 
parable in magnitude to the natural 
variability noise level. 
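
As a point of reference for such tracer experiments, the simplest dispersion model is the analytic Gaussian-puff solution of the one-dimensional advection-diffusion equation for an instantaneous release. The sketch below (an illustration added here, not the method of the tungsten study; the wind speed and eddy diffusivity are placeholder values) shows the essential behavior.

```python
import math

def puff(x, t, Q=1.0, u=5.0, K=1.0e4):
    """Concentration at position x (m) and time t (s) from an instantaneous
    release of mass Q at x = 0, t = 0, carried by a uniform wind u (m/s)
    and spread by eddy diffusivity K (m^2/s): the Gaussian solution of
    dc/dt + u dc/dx = K d2c/dx2."""
    var = 2.0 * K * t                      # variance grows linearly in time
    return (Q / math.sqrt(2.0 * math.pi * var)
            * math.exp(-(x - u * t) ** 2 / (2.0 * var)))
```

The puff center travels with the wind while its width grows as the square root of time, which is why long-range dispersion estimates are so sensitive to the assumed large-scale flow.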

Observational Problems 

The present large-scale data base 
is essentially dictated by the extent 
of the operational networks created 
by the weather forecast services of 
the world. The existing network is 
hardly adequate to define the north- 
ern-hemisphere extratropical atmos- 
phere; it is completely inadequate in 
the southern hemisphere and in the 
equatorial tropics. For example, there 
are only 50 radiosonde stations in 
the southern hemisphere in contrast 
to approximately 500 in the northern. 
The main difficulties arise from the 
large expanses of open ocean which, 
by conventional methods, impede de- 
termination of the large-scale com- 
ponents of atmospheric structure 
responsible for the major energy 
transformations. This critical defi- 
ciency in the global observational data 
store makes it difficult to define the 
variability of the atmosphere in 
enough detail to discern systematic 
theoretical deficiencies. Furthermore, 
the data are inadequate for the spec- 
ification of initial conditions in the 
calculation of long-range forecasts. 

Recent dramatic advances in in- 
frared spectroscopy from satellites 
promise significant strides in defining 
the state of the extratropical atmo- 
sphere virtually independent of loca- 
tion. (See Figure IV-6) However, the 
motions of the equatorial tropical at- 
mosphere lack strong rotational cou- 
pling, making the observational prob- 
lem there more acute. Independent 
wind determinations may be needed 
as well as the information supplied by 
a Nimbus 3 (SIRS sensor) type satel- 
lite. It is not yet known to what ex- 
tent balloon-borne instrumentation or 
measurements from ocean buoys will 
be needed to augment satellite obser- 
vations, especially in the lower tropos- 
phere. This will depend on just 
how strongly the variable character- 
istics of the atmosphere are coupled. 
A more precise knowledge would per- 
mit relaxing observational require- 
ments for an adequate definition of 
its structure. 


(Illustration Courtesy of the American Meteorological Society)
This figure shows the broad similarities between simultaneous temperature sound- 
ings obtained by radiosonde equipment and satellite-borne SIRS (Satellite Infrared 
Spectrometer) and IRIS (Infrared Interferometer Spectrometer) systems. The latter 
systems, however, are able to provide far broader and more continuous coverage 
than conventional equipment. Further work is in progress to overcome the difficul- 
ties of the present instruments in predominantly overcast areas. 



There is some controversy as to 
the inherent deterministic limitations 
of the predictability of the atmos- 
phere — say, for scales corresponding 
to individual extratropical cyclones. 
The span of controversy ranges from 
about one to several weeks. More- 
over, it is not known at all whether 
longer-term characteristics of atmos- 
pheric variability are determinate. 
For example, is it inherently possible 
to distinguish the mean conditions 
over eastern United States from one 
January to another in some deter- 
ministic sense? In the equatorial 
tropics there is very little insight as 
to the spectrum of predictability. 

Needs for Future Improvements 

Broadly, there are three areas that 
require intensive upgrading, the first
two of which are essentially techno-
logical.

Technological Requirements — The 
need for establishment of an adequate 
global observing system has already 
been discussed. In addition, com- 
puters two orders of magnitude faster 
than those currently available are 
needed to permit the positive reduc- 
tion of mathematical errors incurred 
by inadequate computational resolu- 
tion. Faster computers will also per- 
mit more exhaustive tests of model 
performance over a much larger 
range of parameter-space to assess 
the sensitivity of simulations to 
parameterizations of physical process 
elements of the model. Faster com- 
puters will also provide an ability 
to undertake the broad range of ap- 
plications implied by a more sophisti- 
cated modeling capability. 

Scientific Requirements — The sci- 
entific requirements stem from the 
necessity of refining the formulation 
of process elements in the models. 
To cite a few: boundary-layer inter- 
actions — to determine the depend- 
ence of the heat, momentum, and 
water-vapor exchange within the 
lower kilometer of the atmosphere 
as a function of the large-scale struc- 
tural characteristics; internal turbu- 
lence — to determine the structure 
and mechanisms responsible for in- 
termittent turbulence in the "free" 
atmosphere, which is apparently re- 
sponsible for the removal of signifi- 
cant amounts of energy from the 
large scale and may also play a role 
in the diffusion of heat, momentum, 
and water vapor; and convection — 
to determine how cumulus overturn- 
ing gives rise to the deep vertical 
transport of heat, water vapor, and, 
possibly, momentum. 

We still do not know the con- 
sequences of particulates, man-made 
or natural, either directly on the 
radiative balance or ultimately on 
the dynamics. 

In the tropics, we have yet to com- 
pletely understand the instability 
mechanisms responsible for the for- 
mation of weak disturbances or the 
nature of an apparent second level 
of instability which transforms some 
of these disturbances into intense 
vortices, manifested as hurricanes 
and typhoons. Without an under- 
standing of the intricacies of the 
tropics, it is impossible to deal com- 
prehensively or coherently with the 
global circulation, particularly with 
the interactions of the circulation of 
one hemisphere with that of the
other.

Most of these critical scientific 
areas of uncertainty require intensive 
phenomenological or regional obser- 
vational studies. These will provide 
the basic data as foundations for a 
better theoretical understanding. 

Any one of the general scientific 
and technological categories listed 
above may at any one time provide 
the weakest link in the complex chain
required to advance a modeling and 
simulation capability. Obviously, 
then, they must be upgraded at com- 
patible rates. 


A comprehensive look at the status, 
needs, and implications of an under- 
standing and simulation capability of 
the global circulation is embodied in 
the Global Atmospheric Research 
Program (GARP), which was estab- 
lished several years ago as an inter- 
national venture under the joint 
auspices of the World Meteorological 
Organization and the International 
Council of Scientific Unions. In the 
United States, GARP is overseen by 
a National Academy of Sciences 
committee that has produced a plan- 
ning document for U.S. national par- 
ticipation. Almost all the problem 
areas discussed above have come to 
the attention of the U.S. Committee 
for GARP. The international time- 
scale for major field experiments ex- 
tends into the late 1970's. Concomi- 
tantly, national and international 
research programs to support and 
derive results from the field programs 
will be established. The time-scales 
governing GARP planning imply that 
one can expect the necessary elements 
to be systematically undertaken over 
about a ten-year period. The first 
GARP tropical experiment will take 
place in 1974 in the eastern equatorial 
Atlantic and the first GARP global 
data-gathering experiment is sched- 
uled for 1976 or later. GARP is the 
research part of the World Weather 
Program (WWP). The other part of 
the WWP is the World Weather 
Watch (WWW), whose objective is 
to bring the global atmosphere under 
surveillance and provide for the 
rapid collection and exchange of 
weather data as well as the dissemi- 
nation of weather products from cen- 
tralized processing centers. GARP 
will rely heavily on data obtained 
from the WWW. (See Figure IV-7) 





The map shows the current and planned global radiosonde network. The current 
network is adequate only over Europe, central Asia, and the United States. The 
planned additions will add greatly to our knowledge of the southern hemisphere; they 
are part of the first phase of the World Weather Watch. 



Short-, Medium-, and Long-Term Forecasting 

The effects of weather on human 
activities and the importance of ac- 
curate weather predictions and timely 
weather warnings for human safety 
and comfort hardly need stating. The 
farmer, the seafarer, the aviator, the 
man on the street all share a com- 
mon concern for the weather. Hurri- 
canes, tornadoes, floods, heavy snows, 
and other severe weather phenomena 
take a heavy toll of lives and cause 
billions of dollars loss in damage and 
disruption each year. The cost would 
be even greater were it not for 
weather warnings and forecasts. 

As the science of weather predic- 
tion grows, it touches an ever wider 
range of human problems. Today 
there is much concern about air pol- 
lution and the possible effects of pol- 
lutants on weather and climate. The 
mathematical models used in numeri- 
cal weather prediction provide the 
best known means of determining 
how pollutants are spread over large 
distances and how they might affect 
weather patterns. Furthermore, math- 
ematical modeling has reached the 
stage where interactions of the at- 
mosphere with the ocean can be 
taken into account. This development 
opens up the possibility of predicting 
changes in the physical state of the 
upper layers of the ocean, which 
might prove useful to the fishing in- 
dustry and other marine activities. 

Although of great economic value, 
present-day forecasts fall well short 
of perfection. Even modest improve- 
ments in accuracy would result in 
substantial additional benefits. With 
the new tools now available, espe- 
cially the meteorological satellite, op- 
portunities exist for increasing the 
accuracy of forecasts at all ranges — 
short, medium, and long. 

The Nature of Weather Prediction 

It is customary, and for some pur- 
poses useful, to divide the subject of 
weather prediction into three cate- 
gories: short, medium, and long 
term. These categories are generally 
understood to refer to time ranges of 
0-24 hours, 1-5 days, and beyond 
5 days (e.g., monthly and seasonal 
forecasts), respectively. While it is 
often convenient to discuss the fore- 
cast problem under these headings, 
it is important to realize that they 
do not necessarily represent logical 
divisions of the subject in terms of 
methodology employed, concepts in- 
volved, or phenomena treated. 

Weather prediction, as presently 
practiced, is actually a highly com- 
plex subject. It deals with such 
diverse phenomena as thunderstorms, 
tornadoes, hurricanes, and cyclonic 
storms, and with a wide variety of 
weather elements — wind, tempera- 
ture, and precipitation, to name a few 
of the more important. Moreover, it 
involves the use of an assortment of 
techniques, some based on human 
judgment, others founded on physical 
law and numerical computation. 
Weather forecasting is still a mixture 
of art and science, but a mixture in 
which the scientific ingredient is be- 
coming increasingly dominant as fun- 
damental understanding of the at- 
mosphere grows and more and more 
application is found for numerical
methods.

In the following sections we will 
review the principal elements in- 
volved in prediction at different 
ranges, dividing the subject according 
to the pertinent phenomena. Figure 
IV-8 shows the geographical range, 
both latitudinally and in height, of 
data needed for forecasting. To aid 

in the discussion, it is desirable first 
to summarize briefly the methods 
employed in weather prediction. 

Prediction Methods 

Numerical Weather Prediction — 
This is the term applied to forecast 
methods in which high-speed digital 
computers are used to solve the 
physical equations governing atmos- 
pheric motions. In order to compute 
the future state of the atmosphere 
accurately, the initial or present state 
must be specified by observation. 
Numerical methods are most success- 
fully applied in predicting the be- 
havior of the synoptic-scale disturb- 
ances (cyclones, anticyclones, jet 
streams) of middle and high latitudes. 
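
A schematic, not an operational model: the core of any such computation is a finite-difference time-march of the governing equations from the observed initial state. The sketch below (an added illustration in Python; the grid, wind, and periodic domain are all assumptions) does this for the one-dimensional advection equation with an upwind scheme.

```python
def forecast(T0, u, dx, dt, steps):
    """March dT/dt + u dT/dx = 0 forward with an upwind scheme on a
    periodic domain; u must be positive for this stencil."""
    T = list(T0)
    c = u * dt / dx                       # Courant number; stable for 0 <= c <= 1
    for _ in range(steps):
        T = [T[i] - c * (T[i] - T[i - 1]) for i in range(len(T))]
    return T

# With c = 1 the scheme translates the initial "weather" exactly one
# grid point downstream per step:
print(forecast([0, 0, 1, 0], u=1.0, dx=1.0, dt=1.0, steps=1))  # [0.0, 0.0, 0.0, 1.0]
```

Real models apply the same marching idea to the full set of equations in three dimensions, which is why both observational coverage and computational resolution limit their accuracy.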

Extrapolation — In this method, 
successive positions of the feature 
being forecast, for instance a low- 
pressure center, are mapped, and the 
future position is estimated by con- 
tinuing past displacements or trends. 
Since the advent of numerical weather 
prediction, this method has fallen into 
disuse in predicting motions of syn- 
optic systems, but it is still useful in 
other connections, for example, in 
predicting movements of individual 
thunderstorms seen on a radarscope. 
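
For concreteness, a minimal version of displacement extrapolation is sketched below (an added illustration; the coordinates and time interval are arbitrary).

```python
def extrapolate(track, steps=1):
    """Continue the most recent displacement of a mapped feature
    (e.g. an echo on a radarscope). `track` holds (x, y) positions at
    equal time intervals; returns the position `steps` intervals ahead."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (x1 + steps * (x1 - x0), y1 + steps * (y1 - y0))

# A feature that moved from (0, 0) to (3, 1) in the last interval:
print(extrapolate([(0, 0), (3, 1)]))           # (6, 2)
print(extrapolate([(0, 0), (3, 1)], steps=2))  # (9, 3)
```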

Steering — In the steering method, 
a smaller-scale weather system or 
feature is assumed to move with the 
direction and speed of a larger-scale 
current in which it is embedded. 
Thus, a hurricane may be displaced 
according to the broad-scale trade- 
wind current in its vicinity. The ac- 
curacy of the method depends on how 
well the basic steering assumption 
is satisfied and how accurately the 
steering current is known or pre- 






The diagram gives an indication of the data necessary for forecasts in the middle 
latitudes for varying lengths of the forecast period. It is important to note that both 
atmospheric and oceanographic data are needed for all forecast periods. 

Statistical Forecasting — Though
statistical methods have wide appli-
cation in forecasting, the term, as
applied here, refers to any of a num-
ber of techniques in which past data
samples are employed to derive sta-
tistical relationships between the
variable being forecast and the same
or other meteorological variables at
an earlier time. The statistical method
is particularly valuable in forecasting
local phenomena that are too complex
or too poorly understood to be
treated by numerical or physical
methods but that experience has
shown to be related to identifiable,
antecedent causes.

The Analogue Method — The aim
of this method is to find a previous
weather situation which resembles
the current situation and to use the
outcome of the earlier case to deter-
mine the present forecast. The
method has the advantage of sim-
plicity, but its usefulness is extremely
limited since sufficiently close ana-
logues are difficult to find, even when
long weather records are available.

Mixed Methods — Combinations
of the foregoing methods are quite
common. Thus, surface temperature
is customarily forecast by a combina-
tion of numerical and statistical tech-
niques in order to obtain better
predictions than would be obtained
from use of the numerical method
alone.

Short-Range Prediction

The problems encountered, meth-
ods employed, and the time period
for which accurate predictions can
be made differ according to the phe-
nomenon or scale of motion being
forecast. It is therefore convenient
to discuss the subject on the basis
of different types of weather systems
involved. To keep the subject within
reasonable limits, the discussion will
be limited to the following phenom-
ena selected on the basis of their
practical importance: severe local
storms (thunderstorms, hailstorms,
and tornadoes); hurricanes; and syn-
optic disturbances (cyclones, anticy-
clones, and fronts and their asso-
ciated upper-level troughs, ridges,
and jet streams).

Severe Local Storms — These
storms develop with extreme rapidity
and seldom have lifetimes of more
than a few hours. On the basis of
the large-scale temperature, moisture,
and wind fields, and their expected
changes, it is possible to delineate
areas in which severe storms are
likely to occur 6 to 12 hours in ad-
vance, or sometimes even longer. But
there is at present no way of predict-
ing when and where an individual
storm will develop. Once a storm
has been detected, extrapolation and
steering methods can be used to
predict its motion with fair accuracy,
but in view of the short lifetime of
the typical storm, the forecast rarely
holds for more than a few hours.

Weather radar is the most valuable
tool in severe-storm detection, and
it is only since the introduction of
radar that adequate monitoring of
severe storms has been possible.
Geostationary satellites also have
great potential usefulness in identify-
ing and tracking these systems. Until
there is full radar coverage of the
United States and permanent surveil-
lance by geostationary satellite with
both visual and infrared sensing ca-
pability, short-range prediction of
severe storms will not have reached
the limits of accuracy allowed by the
present state of the art.

Ultimately, one may hope that the 
methods of numerical weather pre- 
diction used so successfully with 
larger-scale storms will be applied to 
thunderstorms and other small-scale 
phenomena. But there seems no clear 
way of achieving this hope in the 
foreseeable future. To forecast these 
phenomena by numerical methods re- 
quires observations of the basic me- 
teorological variables — wind, tem- 
perature, and moisture — at spatial 
intervals of less than a kilometer and 
nearly continuously in time (cf., pre- 
sent spacing of about 400 km. and 
time intervals of 12 hours). Eco- 
nomically, this is a prohibitive re- 
quirement, quite apart from its prac- 
tical feasibility in terms of the 
instrumentation and observing sys- 
tems currently envisaged. 

Despite the present hopelessness 
of straightforward applications of 
physical-numerical methods to the 
prediction of small-scale phenomena, 
there is no doubt that opportuni- 
ties exist for improved forecasting 
through properly directed research 
efforts. New developments in instru- 
mentation and measuring systems — 
doppler and acoustic radars and the 
geostationary satellites, to mention 
the most promising — utilized in con- 
junction with special observing pro- 
grams planned for the future, offer 
great opportunities for advancing un- 
derstanding of severe storms. From 
this understanding, improved tech- 
niques are bound to emerge. For in- 
stance, it has been found that, unlike 
the typical thunderstorm, very large 
thunderstorms tend to move to the 
right and slower than the steering 
current. A better physical under- 
standing of the cause of this behavior 
would undoubtedly lead to superior 
forecast techniques. 

Hurricanes — Hurricane prediction 
has improved steadily during the past 
decade or two. The improvement has 
been brought about by the use of 
aerial reconnaissance, radar, and, 
more recently, meteorological satel- 
lites to detect and track the hurricanes 
and by the development of better 
techniques for predicting their move- 
ment. Skill is still largely lacking in 
forecasting their development, but 
fortunately they form sufficiently 
slowly and usually far enough away 
from land areas that the development 
problem is seldom critical. 

In the past, extrapolation and steer- 
ing methods have been the mainstays 
in predicting hurricane movement. 

Currently, the most accurate method 
is a statistical one that uses past 
weather records to derive regression 
equations relating future movement 
to previous movement and to various 
measures of the large-scale atmos- 
pheric structure in the region sur- 
rounding the hurricane. With this 
method, hurricane positions can be 
predicted 24 hours in advance with 
an average error of about 100 nauti- 
cal miles. While this figure leaves 
considerable room for improvement, 
there can be no doubt about the 
enormous value of current forecasts 
in terms of lives saved and property 
damage reduced. 
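
The regression step itself is ordinary least squares. A stripped-down, single-predictor version is sketched below (an added illustration; the displacement sample is invented, and operational schemes screen many predictors describing the surrounding large-scale structure).

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x: the single-predictor
    core of the regression approach described above."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Invented training sample: previous vs. next 24-hour displacement
# (nautical miles) for a handful of past storms.
prev = [100, 150, 200, 250]
next24 = [110, 160, 190, 260]
a, b = fit_line(prev, next24)
forecast_nm = a + b * 180     # forecast for a storm that just moved 180 nm
```

With several such predictors, the fitted equations yield the 24-hour position forecasts whose average error is quoted above.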

Further refinement of the statistical 
method and better observations of 
the broad-scale features of the hurri- 
cane environment could lead to some 
improvement in hurricane prediction, 
but it seems likely that the statistical 
method has already approached its 
limits of accuracy. Development of 
numerical prediction methods would 
seem to hold the key to further prog- 
ress in this area. Methods of numeri- 
cal prediction have already been tried 
which forecast the large-scale steering 
flow in the vicinity of hurricanes and 
thereby allow better use of the steer- 
ing principle. These methods have 
met with some degree of success, 
yielding errors comparable to, or 
slightly larger than, the statistical
method.

More significant and promising for 
the future has been the development 
in recent years of theoretical models 
which, starting from assumed initial 
conditions, are able to simulate many 
important features of hurricanes. 
These models have reached the stage 
where they could be tested routinely 
in the atmosphere if the proper initial 
data — i.e., observations of wind, 
temperature, and humidity at suffi- 
ciently close intervals to resolve the 
atmospheric structure in and near the 
hurricane — were available. The in- 
terval required is 100 kilometers or 
less, well beyond present observa- 
tional capability. However, it is con- 
ceivable that geostationary satellites 

with visual and infrared sensors, in-
cluding sounders, could go a long
way toward providing the type of 
information needed for carrying out 
physical-numerical prediction of hur- 
ricane formation, movement, and in-
tensification.

Despite these promising theoretical 
and observational developments, it 
would be premature to enter on a 
crash program of hurricane predic- 
tion. Emphasis now must be put on 
improving the physical basis of hur- 
ricane models and on developing the 
full potential of the geostationary 
satellite as an observing platform. 
Tropical field experiments planned as 
part of the Global Atmospheric Re- 
search Program (GARP) will assist 
theoretical studies of hurricanes by 
providing data suitable for investi- 
gating the nature of the interaction 
of mesoscale convective phenomena 
with the larger-scale flow patterns of 
the tropics. 

Synoptic Systems — During the 
past decade or two, thanks to the in- 
troduction of high-speed computers 
and the development of numerical 
weather prediction, remarkable prog- 
ress has been made in predicting the 
genesis and movement of high and 
low pressure systems and tropos- 
pheric circulation features in general. 
Prognostic weather maps prepared by 
computer now surpass the efforts of 
even the most skilled and experienced
forecasters.

Despite these successes, short-range 
forecasts of specific weather elements 
often leave much to be desired. In 
part, the shortcomings are due to 
small-scale phenomena which, as ex- 
plained earlier, are not predictable, 
except in a statistical sense, more than 
a few hours in advance. But, in con- 
siderable measure, they can also be 
attributed to deficiencies or limita- 
tions in the numerical prediction 
models. The models are most suc- 
cessful in predicting pressure and 
wind fields; they are less successful 
in predicting cloud and precipitation 
amounts and patterns and in answer- 
ing such critical problems as whether 
precipitation will be in the form of 
rain or snow. These problems, in- 
volving interactions of wind, tem- 
perature, and moisture fields, are of a 
different order of difficulty. Short- 
range predictions also suffer some- 
what from data deficiencies, particu- 
larly in oceanic and adjoining regions. 
Satellite observations have, however, 
alleviated the data deficiencies to a 
considerable degree in recent years. 

There are several avenues for ad- 
vancing the science of short-range 
prediction of synoptic-scale phenom- 
ena; all of them are being actively 
pursued and deserve encouragement. 
First, fine-grid scale models are being 
developed which accept data at grid 
intervals of half or less the current 
standard mesh length of about 400 
kilometers. Use of a finer grid per- 
mits better resolution and more accu- 
rate depiction of the synoptic patterns 
and improves the accuracy of the 
computational procedures. Unfortu- 
nately, the presently available obser- 
vations are not ideally suited for 
fine-grid computation. Though a net- 
work of surface observations exists 
which makes it possible to represent 
surface weather features more pre- 
cisely than is presently done in nu- 
merical prediction, no corresponding 
closely spaced upper air observations 
are available. High-resolution, scan- 
ning radiometric sounders aboard 
satellites offer a promising means of 
overcoming this gap, and every effort 
should be made to speed their devel- 
opment and application. Data from 
more advanced satellites can also be 
expected to improve further the qual- 
ity of ocean analysis, and thereby 
contribute to better short-range fore- 
casts over ocean areas and adjacent 
coastal regions. 

Another important avenue for ad- 
vancing short-range prediction is 
through continued efforts at improv- 
ing the physical basis of the predic- 
tion models. Such efforts can be 
carried out in part by theoretical
means, using presently available 
knowledge of the physical processes. 

But they will also almost certainly 
require the acquisition of special data 
sets of the sort planned under GARP 
and other large observational pro- 
grams. Better modeling of the physi- 
cal processes will not only widen the 
scope of the phenomena that can be 
forecast successfully by objective 
means but will result in greater ac- 
curacy of the forecast as a whole. 

A final important new direction in 
short-range prediction is in modeling 
of the near surface layer. This is the 
layer that affects man most directly. 
Accurate predictions of its structure 
will contribute to successful predic- 
tions of the dispersal of pollutants in 
the atmosphere, and of fog and other 
visibility- and ceiling-reducing fac- 
tors that hamper aircraft operations. 
Modeling of this layer is a difficult 
undertaking, since its characteristics 
and behavior are controlled in large 
measure by turbulent processes. Both 
theoretical work and field observa- 
tional programs will be required to 
advance this effort. We are still a 
long way from being able to make 
surface-layer prediction a part of the 
routine prognosis. 

Medium-Range Prediction 

During the past dozen years, the 
greatest gains in forecast skill have 
probably occurred at medium range 
(1-5 days). These gains are the direct 
outcome of the development and ap- 
plication of numerical prediction 
models capable of forecasting the for- 
mation and movement of synoptic- 
scale weather systems. The method 
differs in no way from that described 
in connection with short-range pre- 
diction; it is simply extended for a 
longer period. 

Surface weather predictions are 
now quite satisfactory for periods of 
about 48 hours. Upper-level prog- 
noses show some degree of skill for 
periods as long as three to five days. 
Again, pressure and wind patterns 
are better forecast than such elements 
as precipitation. At medium ranges 

it is still possible to infer likely areas 
of convective activity — thunder- 
storms and the like — but prediction 
of individual small-scale disturbances 
is completely beyond the realm of
possibility.

Numerical experiments conducted 
as part of GARP suggest that it is 
possible, in principle, to forecast day- 
to-day weather changes for periods 
as long as two to three weeks in ad- 
vance — though some critics feel this 
is an excessive figure in terms of 
what the public would judge to be 
successful forecasting. In any event, 
there is good reason to believe that 
useful forecasts can be made by nu- 
merical methods for periods well in 
excess of the present three-to-five
day limit.

The main obstacles in the way of 
increasing the time range of forecasts 
(and thereby also their accuracy, even 
at shorter ranges) are the lack of an 
adequate observational network on a 
worldwide basis and deficiencies in 
the physical formulation of the pre- 
diction models. A principal aim of 
GARP is to overcome these observa- 
tional and physical shortcomings. 

A number of areas or industries 
have been identified in which more 
accurate predictions in the five- to 
twenty-day range would result in 
great economic benefit. Among these 
are agriculture, transportation, public 
utilities, and the construction and 
fishing industries. 

Long-Range Prediction 

Long-range prediction is a contro- 
versial subject. Its proponents make 
a variety of claims, ranging from the 
ability to forecast a given day's 
weather weeks or months in advance 
to the ability to forecast, with some 
small degree of skill, departures of 
temperature or precipitation from 
their monthly or seasonal means. 
Skeptics contend that the whole busi- 
ness is a waste of time, either that we 
do not know how to make long-range 
predictions or that long-range pre- 
diction is an impossibility. Where 
does the truth lie? 

First we might ask: Are there valid 
grounds for attempting long-range 
prediction? Here the answer is defi- 
nitely "yes." If weather changes were 
due exclusively to migratory synoptic- 
or smaller-scale weather systems, it 
is known from the GARP experiments 
cited previously that prediction would 
not be possible beyond two or three 
weeks. But it has long been recog- 
nized that there are larger-scale pat- 
terns in the atmosphere which tend to 
persist or recur over periods of weeks, 
months, or seasons. Drought episodes 
and prolonged spells of warm or cold 
weather may be cited as examples of 
such patterns. They are associated 
with abnormal features of the circu- 
lation — unusual displacements of the 
jet stream, the semi-permanent high 
and low pressure centers, and so
on.

Theories of Causation — The cause 
of long-period weather changes is a 
debatable subject. Many investiga- 
tors have sought to connect them to 
extraterrestrial events — to variations 
of solar radiation, in particular — but 
the evidence in favor of an extrater- 
restrial origin is not impressive. Other 
investigators have suggested that they 
are caused by complex feedback 
mechanisms within the atmosphere. 
This hypothesis cannot be discounted. 
In laboratory experiments with rotat- 
ing fluids, it has been found possible 
to generate long-period (on the time- 
scale of the model) circulation fluctua- 
tions even when external conditions 
are kept rigidly constant. 

A final theory, which has steadily 
gained support, attributes long-period 
weather variations to interactions of 
the atmosphere with surface features. 
Anomalies of sea-surface temperature 
and of snow cover are examples of 
conditions that are believed capable 
of producing and perpetuating abnor- 
mal weather situations. 

Forecasting Methods — Though 
some physical reasoning may enter 
into the formulation of a long-range 
forecast, the methods currently in use 
do not have a physical basis. The 
numerical methods applied at shorter 
ranges are not, as presently formu- 
lated, appropriate to long-range pre- 
diction. 

Thus, main reliance is put on ex- 
trapolation, statistical, and analogue 
methods of forecasting, and human 
judgment plays a heavy role. The 
results obtained from these methods 
show at best only slight skill, and 
there seems little or no hope of sig- 
nificant improvement through their 
continued use and development. How- 
ever, in view of the great economic 
importance of long-range prediction 
and the growing evidence that a 
meaningful physical understanding 

of long-period atmospheric variations 
can be achieved, it is essential that 
efforts to derive more suitable quanti- 
tative methods of prediction be con- 
tinued and strengthened. 

Needed Scientific Activity — Ac- 
tivities of two types deserve particu- 
lar encouragement in this respect. 
First are programs to acquire the kind 
of global data needed for establishing 
the physical basis of long-range pre- 
diction. Such programs will have to 
endure for a long time and will not 
only have to measure the usual me- 
teorological variables employed in 
numerical prediction but will have to 
measure additional parameters such as 
sea-surface temperature, snow cover, 
and the like. It is apparent that ob- 
servations from satellites will be the 
key element in a global monitoring 
system. 

A second type of activity that 
merits vigorous support is experi- 
mental work in numerical modeling 
of the general circulation, of the sort 
now practiced by a number of groups 
in the United States. From such ex- 
periments, it may well be possible to 
discover the underlying causes of 
long-term weather and climatic anom- 
alies. In fact, the modeling experi- 
ments are essential to the observa- 
tional effort, for without them we can 
never be sure, until perhaps it is too 
late, that the proper variables are 
being measured. 

Long-Range Weather Forecasting 

Scientists who work in long-range 
weather forecasting encounter great 
difficulties, not only in the intricacies 
of their chosen field but also in get- 
ting across to other scientists and the 
lay public the essential nature of 
their problem and the reasons for 
their painfully slow progress in the 
modern-day milieu of satellites, com- 
puters, and atomic reactors. When 
solar eclipses can be predicted to frac- 
tions of a second and the position of 

a satellite pinpointed millions of miles 
out in space, it is not readily under- 
standable why reliable weather pre- 
dictions cannot be made for a week, 
month, season, or even a year in 
advance. Indeed, eminent scientists 
from disciplines other than meteorol- 
ogy, underestimating the complexity 
of the long-range problem, have tried 
to solve it only to come away with a 
feeling of humility in the face of 
what the late von Neumann used to 

call "the second most difficult prob- 
lem in the world" (human behavior 
presumably being the first). 

And yet, the potential economic 
value of reliable long-range forecasts 
probably exceeds that for short-range 
(daily) forecasts. Many groups need 
as much as a month or a season or 
more lead-time to adjust their plans. 
These include such diverse types as 
manufacturers (e.g., summer suits, 



raincoats, farm implements), fuel and 
power companies, agriculturalists, 
construction companies, and com- 
modity market men, to say nothing 
of vacationers. Aside from this, long- 
range forecasting, by setting the cli- 
matic background peculiar to a given 
month or season, is of distinct value 
to the short-range forecaster. For ex- 
ample, it can alert him to the likeli- 
hood of certain types of severe 
storms, including hurricanes, intense 
extratropical cyclones, and even broad 
areas most frequently vulnerable to 
such storms. 

Most of the needs of these groups 
for long-range forecasts cannot pres- 
ently be met, however, because of the 
low skill level of predictions or the 
inability to predict anomalous weather 
at ranges beyond a month or season. 
Why is the problem so intractable? 

The General Problem 

In the first place, long-range fore- 
casting requires routine observations 
of natural phenomena over vast areas 
— and by vast we mean at least 
hemisphere-wide coverage in three 
dimensions. More probably, the en- 
tire world's atmosphere, its oceans 
and its continents, must be surveyed 
because of large-scale interactions 
within a fluid that has no lateral 
boundaries but surrounds the entire 
earth. In contrast to the physicist, 
the meteorologist has no adequate 
laboratory in which to perform con- 
trolled experiments on this scale, al- 
though some recent work with elec- 
tronic computers holds out hope for 
useful simulation. 

Inadequate Observational Net- 
works — When the immense scale of 
the atmosphere is realized, it becomes 
clear that the present network of 
meteorological and oceanographic ob- 
servations is woefully inadequate. 
Even in temperate latitudes of the 
northern hemisphere, relatively well 
covered by surface and upper-air re- 
ports, there are "blind" areas of a 
size greater than that of the United 

States. The tropics are only sparsely 
covered by reports, and the data cov- 
erage in the southern hemisphere is 
poorer still. 

In the southern hemisphere, a moat 
thousands of miles in diameter sepa- 
rates the data-rich antarctic continent 
from the temperate latitudes, making 
it virtually impossible to get a coor- 
dinated picture of what is occurring 
now, let alone what may occur in the 
future. The "secrets of long-range 
forecasting locked in Antarctica" — a 
cliche often found in press articles — 
are indeed securely locked. Of course, 
cloud and radiation observations from 
satellites are assisting to an ever in- 
creasing degree, but better methods 
of determining the atmosphere's pres- 
sure, wind, and temperature distribu- 
tion from satellite and other types of 
observations are urgently needed. 

Inadequate Understanding — Even 
if every cubic mile of the atmosphere 
up to a height of 20 kilometers were 
continuously surveyed, however (and 
there are 2,500 million such volumes), 
reliable long-range forecasts would 
still not be realizable. Regardless of 
their frequency and density, observa- 
tions are not forecasts; they merely 
provide "input data" for extended 
forecasting. Meteorologists have yet 
to develop a sufficient understanding 
of the physics of the atmosphere and 
the ocean to use these input data 
effectively in long-range forecasting, 
although this understanding is un- 
likely to come about in the absence 
of such data. 
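
The "2,500 million such volumes" figure can be checked with a rough calculation. The sketch below is ours, not the report's; the Earth-radius value and the rounding are assumptions:

```python
import math

# Back-of-envelope check of the "2,500 million cubic-mile
# volumes" figure: Earth's surface area times a 20-kilometer
# depth, expressed in cubic miles.
EARTH_RADIUS_MI = 3959.0      # mean radius of the Earth, in miles
KM_PER_MI = 1.609344

surface_area_sq_mi = 4.0 * math.pi * EARTH_RADIUS_MI ** 2
depth_mi = 20.0 / KM_PER_MI   # 20 km expressed in miles

volume_cubic_mi = surface_area_sq_mi * depth_mi
print(round(volume_cubic_mi / 1e6))  # roughly 2,450 million, consistent
                                     # with the 2,500 million cited
```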

The Present Situation 

The Data Base — Today the data 
and facilities for making long-range 
forecasts, inadequate as they may be, 
are far better than ever. In addition 
to about 25,000 surface weather re- 
ports (22,000 over land and 3,000 
over sea) available each day at a 
center like Washington, there are 900 
balloon observations of wind direc- 
tion and speed, and 1,500 radiosonde 
observations of upper air pressure, 

temperature, and humidity and, fre- 
quently, wind. In the same 24-hour 
period about 1,300 aircraft reports, 
dozens of indirect soundings of up- 
per air temperatures made by the 
Nimbus-SIRS satellite system, and 
hundreds of satellite cloud photo- 
graphs are received. 

While these figures are impressive 
they are inadequate, especially be- 
cause they represent a most uneven 
geographical array of observations 
and neglect proper surveillance of the 
ocean. The vast blind areas are, un- 
fortunately, located in important wind 
and weather system-generating areas, 
like the northern Pacific Ocean, the 
tropics, and parts of the southern 
hemisphere. These systems, once 
generated, soon influence weather in 
distant areas around the world, their 
complex effects often traveling faster 
than the storms themselves. Hence, 
if an area is especially storm-prone 
during a particular winter, the storms 
will persistently influence other areas 
thousands of miles distant, sometimes 
leading to floods or droughts. Obvi- 
ously, if the wind and weather char- 
acteristics in the primary generating 
area are imperfectly observed one 
cannot hope to predict the distant 
weather. 

As pointed out earlier, data alone, 
regardless of how extensive in space 
and how frequent in time, are not 
sufficient to insure reliable long-range 
forecasts. It does appear, however, 
that more data of a special kind and 
accuracy are required if a successful 
solution is to be obtained. The kinds 
of data required and a rough estimate 
of the density will be discussed later. 

State of the Art — Forecasts can be 
made for future days by using elabo- 
rate numerico-dynamical methods and 
high-speed computers. In these meth- 
ods, one predicts various meteoro- 
logical elements at many levels for 
successive time-steps. The approach 
always begins with the initial condi- 
tions observed at many levels at a 
certain time over a large area like the 
northern hemisphere and forecasts 



for time-steps of about 15 minutes. 
Each iteration starts from the last 
prediction, and the forecast is carried 
forward for many days. 
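
The time-stepping procedure described above can be illustrated with a toy model. This is a sketch of the iterative principle only, not of any operational scheme; the one-dimensional field, the constant wind, and the ring geometry are all our assumptions:

```python
# A toy illustration of iterative numerical prediction: a
# one-dimensional "weather" field on a ring of grid cells,
# advected by a constant wind and stepped forward in
# 15-minute increments.

def step(field, wind_cells_per_step=1):
    """Advance the field one time-step by shifting it downwind."""
    n = wind_cells_per_step % len(field)
    return field[-n:] + field[:-n] if n else field[:]

def forecast(initial, steps, wind_cells_per_step=1):
    """Start from the observed initial conditions; each
    iteration starts from the previous prediction."""
    field = initial[:]
    for _ in range(steps):
        field = step(field, wind_cells_per_step)
    return field

# 24 hours of 15-minute steps = 96 iterations
initial = [1.0] + [0.0] * 9          # a single "disturbance" at cell 0
day_ahead = forecast(initial, steps=96)
print(day_ahead.index(1.0))          # 96 shifts around a ring of 10 -> cell 6
```

As in the real procedure, errors in the initial conditions would be carried forward and compounded at every iteration.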

Numerical predictions of this kind 
form the basis for the extended (5- 
day) forecasts made by the National 
Weather Service, an additional com- 
ponent being supplied by the experi- 
ence of the forecaster. How accurate 
are they? 

The skill of the final pressure- 
pattern predictions made by the pres- 
ent "man-machine mix" from two to 
six days in advance is shown in 

Figure IV-9 by the curve marked 
"Present." Without going into de- 
tails, 1.00 on the vertical scale implies 
perfect forecasts, and zero indicates 
forecasts that are no better than maps 
randomly selected from the same 
month of past years. As can be seen 
in Figure IV-9, extended forecasts de- 
teriorate rapidly from day to day; by 
the sixth day, one might as well use 
the initial day's map as a forecast 
("persistence"). Even at the fourth 
day, the skill is low enough to be of 
marginal economic value. Assuming, 
however, that the present accuracy of 
forecasts for the fourth day is eco- 
nomically valuable, we might ask 




Figure IV-9. The graph shows the accuracy — and limitations — of National Weather Service 
forecasts of the pressure pattern for North America for the period March 1968 to 
February 1969. 

how good the two- to six-day predic- 
tions would have to be to give us 
accuracy equal to the four-day figure 
at two weeks, or 14 days in advance. 
These computed values are shown in 
the upper curve marked "Future." 
Thus, we see that a six-day forecast 
will have to be about as good as a 
two-day forecast is now. A six-day 
forecast will have to be about 25 
times better than at present (ratio of 
the squares of the six-day correlations 
for "Present" and "Future"), a four- 
day forecast about 10 times better. 
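
The "times better" factors above are ratios of squared correlation coefficients. The sketch below illustrates the arithmetic; the correlation values are illustrative assumptions of ours, chosen only to reproduce the quoted factors, not values read from Figure IV-9:

```python
# Improvement measured as the ratio of squared correlation
# coefficients between required ("Future") and present skill.

def improvement_factor(r_present, r_future):
    """How many times better the future forecast must be."""
    return (r_future / r_present) ** 2

# hypothetical present and required correlations
print(round(improvement_factor(0.18, 0.90)))  # six-day: 25
print(round(improvement_factor(0.28, 0.90)))  # four-day: about 10
```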

Prospects — These are tremendous 
strides that will have to be made, 
especially if one considers the frus- 
tratingly slow rate of progress in 
improving short-range weather fore- 
casts over the past twenty years. The 
situation suggests that some major 
breakthrough in understanding, and 
in the density and quality of observa- 
tions, must come about before de- 
tailed predictions in time and space 
out to two weeks or more will be 
realized. There is controversy in the 
meteorological community as to 
whether forecasts of this type will 
ever be possible. 

Yet the potential for economically 
valuable long-range predictions is not 
as bleak as might be gathered from 
this discussion. While the forecast 
for a given day well in advance may 
be greatly in error using the above 
method, the general weather charac- 
teristics of a period — say, the aver- 
age of computerized forecasts for the 
second or third week in advance — 
may turn out to contain economically 
valuable information. There is still 
no evidence this is so, but the hope 
exists that better and more observa- 
tions combined with more knowledge 
of atmospheric modeling will result 
in this advance. Numerical modeling 
may make a major spurt forward be- 
cause of the development of a first 
model aimed at coupling air and sea. 

What Needs To Be Done 

In order to bring about this prog- 
ress and raise the level of the "Pres- 



ent" curve in Figure IV-9, a vast 
World Weather Watch (WWW) pro- 
gram to acquire an adequate network 
of observations has been set in mo- 
tion by the WMO (World Meteoro- 
logical Organization) and a compan- 
ion research arm, GARP (Global 
Atmospheric Research Program), un- 
der ICSU (International Council of 
Scientific Unions) and WMO. The 
aims, rationale, and scope of these 
undertakings have been well docu- 
mented in many reports and will not 
be reiterated here; suffice to say that 
a satisfactory solution of long-range 
forecasting problems is not likely to 
come about without them. 

Statistical Aggregation — Neverthe- 
less, the future of long-range weather 
forecasting does not and should not 
depend solely on the possibilities in- 
herent in the iterative approach de- 
scribed earlier. Virtually every group 
of meteorologists that has attacked 
this problem over the past century 
has done so by working not with 
short time-step iterations but, rather, 
by studying statistical ensembles and 
the evolution of average wind and 
weather systems — e.g., from month 
to month and season to season. The 
long-range forecasting services of the 
Soviet Union, England, Japan, and the 
United States operate with statistical 
aggregates as well as physical meth- 
ods. In the statistical approach, it is 
taken for granted that the average 
prevailing wind and weather patterns 
for one month, together with the 
associated abnormalities of sea tem- 
peratures and land surfaces (e.g., 
covered or free of snow), largely de- 
termine how the general weather pat- 
terns are going to develop during the 
following month under the influence 
of the solar radiation appropriate to 
time of year. A small effort in nu- 
merical modeling using this philoso- 
phy has begun. 

How good are long-range predic- 
tions by conventional non-iterative 
methods? This is a question of scien- 
tific as well as practical importance, 
because any positive skill over and 
above climatological probability im- 

plies knowledge that ought to funnel 
into further research and thereby lead 
to more reliable prediction. The pres- 
ent skill at forecasting departures 
from normal of average temperature 
at 100 cities over the United States 
for 5-day, 30-day, and experimental 
seasonal forecasts may be roughly 
given as 75, 61, and 58 percent, re- 
spectively, if chance is defined as 50 
percent. Similarly, for precipitation, 
5-day, 30-day, and seasonal forecasts 
average roughly 55, 52, and 51 per- 
cent, respectively. While these skills 
are far from perfect they do indicate, 
particularly for temperature, that the 
methods contain some knowledge of 
long-term atmospheric behavior. The 
5- and 30-day forecasts that are re- 
leased to the public appear to be of 
definite economic value, judging from 
hundreds of comments by users and 
also from their reaction when the 
forecasts are not received on time. 
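
One conventional way to express these percent-correct figures relative to the 50-percent chance level is a Heidke-type skill score. The normalization below is ours, not the report's:

```python
# Convert percent-correct figures to a skill score where
# 0.0 means no better than chance and 1.0 means perfect.

def skill_vs_chance(percent_correct, chance=50.0):
    """Fraction of the gap between chance and perfection closed."""
    return (percent_correct - chance) / (100.0 - chance)

for label, pct in [("5-day temperature", 75), ("30-day temperature", 61),
                   ("seasonal temperature", 58), ("5-day precipitation", 55),
                   ("30-day precipitation", 52), ("seasonal precipitation", 51)]:
    print(label, round(skill_vs_chance(pct), 2))
```

On this scale the 5-day temperature forecasts close half the gap between chance and perfection, while the seasonal precipitation forecasts close only two percent of it.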

Despite the work and performance 
of many groups around the world 
along these practical lines and the 
fact that their forecasts show some 
small but definite skill in long-range 
prediction (contrasted with the utter 
failure, up to now, of dynamical 
iterative models at periods up to 
a month), the statistical-physical- 
synoptic (synoptic here meaning an 
over-all view with the help of maps) 
approach has been relatively neg- 
lected by meteorologists in the United 
States. 

The Role of Oceanography — 
Oceanographers may see the long- 
range problem more clearly than me- 
teorologists as one in which statistical 
aggregates play an important part — 
both in forecasting general thermal 
conditions in the sea and in forecast- 
ing its long-period interaction with 
the atmosphere. Perhaps this is be- 
cause large-scale changes in the sea 
take place much more slowly (about 
ten times more slowly) than in the 
atmosphere, and the oceans can 
therefore provide a sort of memory 
bank for the atmosphere. 

In the past decade, research has 
shown that the thermal state of the 

oceans, especially the temperatures in 
the upper few hundred meters, varies 
considerably from month to month 
and year to year, and that these vari- 
ations are both cause and result of 
disturbed weather conditions over 
areas thousands of miles square. By 
complex teleconnected processes, the 
effects of these disturbed conditions 
are transmitted to areas thousands of 
miles distant. Thus, the prevailing 
wind systems of the globe — the 
westerlies, the trade winds, and the 
jet streams — may be forced into 
highly abnormal patterns with con- 
comitant abnormalities of weather. 
Because these reservoirs of anoma- 
lous heat in the ocean are deep, often 
up to 500 meters, and may last for 
long periods of time, the atmosphere 
can be forced into long spells of 
"unusual" weather, sometimes re- 
sulting in regional droughts or heavy 
rains over periods ranging from 
months to seasons, and even years or 
decades. 

Potential Lines of Action — The 
interface between meteorology and 
oceanography is thus a promising 
area which should receive more at- 
tention. Several items are needed: 

1. A network of observations for 
both air and sea measurements 
over the world's oceans, or at 
least over the Pacific Ocean 
where much of the world's 
weather appears to be gener- 
ated. This network can be a 
mix of ocean weather ships, spe- 
cially equipped merchant ships, 
and — particularly — un- 
manned, instrumented buoys 
which have now been demon- 
strated to be feasible. A net- 
work of observations about 
500 kilometers apart would be 
adequate as a start; later, the 
data gathered could indicate 
whether a finer or coarser grid 
is necessary. Satellite measure- 
ments can supplement but can- 
not replace these observations, 
particularly the subsurface ones 
which monitor the heat reser- 



voirs of the sea and give in- 
formation on the ocean currents. 

2. "Air-sea interaction" should be 
more than a catch phrase. It 
is a subject which must occupy 
the efforts of the best young 
men in geophysics today. 
Equally important is meteorolo- 
gist-oceanographer interaction. 
These men must not be steered 
only into narrow avenues 
where they lose sight of the 
big problems that lie at the 
heart of long-range prediction. 
Special seminars and inclusion 
into academic curricula of large- 
scale air-sea problems on long 
time-scales (months, seasons, 
and decades) are necessary de- 
spite the imprecise knowledge 
relative to short-period phe- 
nomena and short-range nu- 
merical weather prediction. 

3. Special attempts are needed to 
bring meteorologists and ocean- 
ographers together more fre- 

quently in universities and 
laboratories where they can 
analyze oceanographic and me- 
teorological data in real time, 
conduct joint discussions of 
what went on and is going on, 
and try to predict what will go 
on in subsequent months. This 
will involve computers and 
much research, but the research 
effort will be sparked by the 
satisfaction of seeing one's pre- 
dictions verified. This type of 
stimulus has been largely miss- 
ing in the oceanographic com- 
munity, where oceanographers 
have had to work on restricted 
problems mainly with data 
months or years old or with 
series of observations embrac- 
ing a small area. 
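
The 500-kilometer spacing proposed in item 1 implies a rough station count. The sketch below is ours; the Pacific Ocean area used is an outside assumption, not a figure from the report:

```python
# Rough count of stations implied by a 500-kilometer grid
# over the Pacific.  The ocean area is an assumed value
# (roughly 165 million square kilometers).

PACIFIC_AREA_KM2 = 165e6   # assumed area of the Pacific Ocean
SPACING_KM = 500.0

stations = PACIFIC_AREA_KM2 / SPACING_KM ** 2  # one per grid cell
print(round(stations))                         # about 660 stations
```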

These same observations and pro- 
cedures, and their exploitation, will 
assist most aspects of ocean-air inquiry, 
whether iterative or non-iterative 
methods are employed. The ultimate 
long-range prediction scheme will 
probably be a combination of all 

three facets — physical, statistical, 
and synoptic. 

Whether science will be able to 
achieve appreciable skill in long- 
range weather prediction should be 
known in the next ten to twenty 
years, provided enough trained 
people are efficiently employed and 
adequate data, as suggested by the 
WWW and GARP programs, become 
available. If, however, an unbalanced 
program is embarked upon, with little 
or no use made of statistics and 
synoptics, it is unlikely that good, 
practical long-range forecasts will be 
achieved. Considering the rate of 
progress already achieved despite the 
complexity of the problem, the small 
number of scientists who have at- 
tacked it, and the inadequacy of data 
and tools in the pre-computer age, 
the outlook is optimistic, particularly 
in view of the WWW and GARP 
programs. General forecasts for pe- 
riods up to a year in advance are 
quite within reach; even the general 
character of the coming decade's 
weather may be foretold. 

Short-Term Forecasting, Including Forecasting 
for Low-Altitude Aviation 

Substantial progress has been made 
during the past two or three decades 
in the nonclassical, exotic areas of 
the atmospheric sciences and their 
applications. From a state of almost 
no knowledge of the characteristics 
of the atmosphere between 10 and 
30 kilometers, rawinsonde networks 
and high-flying, instrumented air- 
craft have enabled us to produce 
excellent analyses and prognoses over 
most of the northern hemisphere. 
Rocketsonde programs are greatly 
expanding our knowledge of the at- 
mosphere from 30 to 100 kilometers. 
Meteorological satellites promise to 
enable the meteorologist to expand 
his charts to cover the globe. Fur- 
thermore, the speed and capacity of 
the electronic computer make it pos- 
sible for his charts to be prepared in 

time for practical use. The machines 
produce upper-air wind and tempera- 
ture fields that are as accurate as those 
of an experienced meteorologist. 

These and similar advances have 
great practical value. For example, 
sophisticated climatic techniques per- 
mit introduction of the weather fac- 
tor into construction planning and 
other operations. Twenty years ago, 
Fawbush and Miller, in Oklahoma, 
began what is now a successful na- 
tional program of advising the public 
of threatening weather such as tor- 
nadoes and hailstorms. Weather- 
modification programs at military air- 
fields near Spokane and Anchorage 
have all but eliminated air-traffic de- 
lays due to wintertime supercooled 
fog at those locations. 

Air-pollution research and opera- 
tions promise to benefit planning for 
industrial and residential areas, warn- 
ing the public of impending high 
pollution levels, and locating pollu- 
tion sources. Simulations of at- 
mospheric circulation features and 
weather-modification efforts are be- 
ginning to enable atmospheric sci- 
entists to assess, quickly and rela- 
tively economically, the effects of 
deliberate or inadvertent modifica- 
tions in the structure or dynamics of 
meteorological features through a 
wide range of scales. 

Progress has also been great with 
respect to sheer volume of output 
in both the classical and exotic areas 
of the atmospheric sciences, thanks 
both to the electronic computer and 



other modern methods of communi- 
cation and to improvements in or- 
ganization and management. 

Evaluation of Forecast 

The "bread and butter" products 
of the meteorologists, however, are 
the hour-to-hour and day-to-day 
forecasts of rain, snow, and tem- 
perature for the general public and 
of airfield and low-level flying 
weather. These have not fared so 
well. Reliable figures to demonstrate 
improvement of accuracy over the 
past few decades are not available 
for scientific judgment of the per- 
formance of the meteorological com- 
munity. Statistics do exist for a 
large number of forecast targets 
(cities, airports) for a limited time 
period and for a few selected targets 
for two or three decades. However, 
this sparse data sample (which may 
be quite misleading) and subjective 
evaluations over the years suggest 
that, in an over-all sense, short- 
period forecasts have demonstrated 
little improvement for several decades. 

Routine Forecasts of Temperature 
and Precipitation — Between 1942 
and 1965, for example, the Chicago 
office of the National Weather Service 
(NWS) showed a steady improve- 
ment in their combined weather and 
temperature forecasts of about 0.33 
percent per year. Large temperature- 
forecast errors (10° F. or more) made 
by the Salt Lake City office decreased 
from one such error every 6 days 
to one every 14 days. (Statistics for 
the over-all temperature-prediction 
capability of this office are not avail- 
able.) A study of 260 NWS stations 
discerned no noticeable change in 
the ability to forecast rain "today" 
during the first half of the 1960's, 
but did note an increase of about 
3 percent in the number of accurate 
predictions of rain "tonight" and 
"tomorrow." Scattered data such as 
these suggest that the accuracy of 
routine, classical forecasts of tem- 

perature and precipitation has in- 
creased — but only very slowly. 

Hurricane and Typhoon Posi- 
tions — Forecasts for special types 
of weather events in some, if not 
most, cases have fared better. For 
example, from 1955 to 1965, the 
NWS's 24-hour forecasts of hurricane 
positions improved from an average 
error of about 125 nautical miles to 
one of about 110 nautical miles. 
With regard to similar forecasts for 
typhoons in the western Pacific, made 
jointly by the Air Force and Navy 
weather services, errors diminished 
from nearly 170 nautical miles in 
the mid-1950's to about 110 in 1969. 

Winds — Forecasts of winds for 
high-flying aircraft are in the "new" 
and specialized area. Between the 
early and late 1960's, wind-prediction 
errors at 20,000 feet dropped from 
over 15 to under 11 knots. With 
regard to similar forecasts for low- 
flying aircraft (e.g., at 5,000 feet) 
which, although part of a specialized 
activity, can hardly be classed as ex- 
otic, the reduction was about one- 
half that for the higher level. 

Visibility and Cloud Cover — The 
predictions of airfield ceiling and 
visibility made by the Air Weather 
Service (U.S. Air Force) are repre- 
sentative of those made by other 
services. Their statistics for the pe- 
riod January 1968 through January 
1970, compiled from the records of 
200-odd airfields, show a small im- 
provement, not necessarily repre- 
sentative of performance improve- 
ments of previous years. The accuracy 
increased between 3 and 4 percent 
for forecasts with time ranges of 3, 
6, 12, and 24 hours. By 1970, the 
forecasts were better than persistence 
(no change from "time of observa- 
tion") by nearly 4 percent at 3 hours 
and nearly 8 percent at 24 hours. 
Statistics for predictions of low-level, 
in-flight clouds and weather are not 
available, but are likely to be about 
the same as those for airfield con- 
ditions. 

Verification systems used for the 
kinds of forecasts discussed above 
necessarily vary considerably. Opin- 
ions of atmospheric scientists regard- 
ing the representativeness of the 
data, and the value of the methods, 
also differ widely. On the whole, 
however, it can be said that the 
status of forecasting is about the 
same for cities, airfields, and low- 
level flying — on the average not 
bad, on occasion seriously deficient, 
and improving very slowly. 

Factors Responsible for 
Improvements in Forecasting 

The Norwegian Theory — The Nor- 
wegian air-mass and frontal theory, 
developed around 1920, began to 
influence meteorological research and 
application on a large scale by the 
late 1930's. It represented a scien- 
tific and conceptual revolution that 
substantially improved the capabili- 
ties of the atmospheric scientist. The 
Norwegian theory was largely sub- 
jective, and its application relied on 
the individual skill and imagination 
of trained and experienced practi- 
tioners. A large part of the theory 
was concerned with the distribution 
and intensity of rain, surface tem- 
perature and wind changes, and 
cloudiness — elements that directly 
influence man in his daily activities. 

The Rossby Theory — A second 
revolution in concept was initiated 
by Rossby in the 1940's. In con- 
trast to the Norwegian theory, Ross- 
by's approach emphasized the im- 
portance of upper-level wind and 
temperature patterns, whose influ- 
ences on sensible weather were broad 
and ill-defined. The theory was ob- 
jective and lent itself to mathematical 
calculation. Almost at the outset, 
after refinements by a number of 
atmospheric scientists, Rossby's basic 
theory began to produce usable prog- 
noses of upper-level wind and tem- 
perature fields. Further refinements 
produced relatively large-scale fields 
of vertical air-movement from which 
it has become possible to predict 



broad areas of cloud and precipita- 
tion with measurable skill. At first, 
the necessarily large volume of data 
was processed manually; with the 
arrival of electronic computers in the 
mid-1950's, processing could be 
completed in a few hours. 

This approach to research and pre- 
diction caught the fancy of most 
modern atmospheric scientists. Their 
fascination with an objective system 
that really worked — together with, 
in a sense, a commitment to large, 
expensive computer systems — has 
brought into being a breed of sci- 
entist different from those of pre- 
Rossby days. This new approach has 
strengths, but it also has weaknesses. 
On the one hand, real progress has 
been made in predicting for high- 
altitude jet aircraft and even, hope- 
fully, in forecasting large-scale at- 
mospheric features several days 
ahead of time. On the other hand, 
de-emphasis of the Norwegian theory 
has, if anything, degraded the mete- 
orologist's ability to deal with the 
small-scale atmospheric patterns as- 
sociated with weather at or near the 
earth's surface. 

The current approach has made 
some inroads on the short-range, 
small-area problem. In the past three 
or four years, programs employing 
closer grid networks and more at- 
tention to the vertical variation of 
low-level meteorological elements 
have increased the detail of com- 
puter-produced prognoses. In recent 
tests, three-dimensional air-trajectory 
computer programs have increased 
the accuracy of forecasts of airfield 
weather by a few percent in selected 
geographic areas. 

Technological Contributions — 
Most of the small increases in short- 
period weather forecasts of the past 
decade or so are not attributable to 
the atmospheric sciences, however. 
Thus, speeded-up communications 
and computer-operations systems 
have brought the "data-observation 
time" closer to the "forecast time"; 

since short-period forecasts are more 
accurate than long-period ones, an 
improvement has been gained. Net- 
works of observation stations have 
gradually been augmented, benefi- 
cially realigned, and provided with 
improved instrumentation. New kinds 
of data, such as those from weather 
radar, have helped especially in very 
short period forecasting (minutes to 
hours) of clouds, precipitation, and 
severe weather. Improved Air 
Weather Service and other weather- 
reconnaissance planes have strength- 
ened the National Weather Service's 
diagnostic capability; they have been 
vital in pinpointing hurricane loca- 
tions and specifying their intensities. 
Judicious use of various stratifications 
of past weather data (climatology), 
again a technique requiring no mete- 
orological skill in its applications, has 
helped to reduce large errors in local 
forecasts. There have also been ad- 
vances in management practices, such 
as grouping specialized meteorolo- 
gists at locations where they can 
work uninterrupted by telephonic or 
face-to-face confrontations with their 
public or military customers. 

Satellites — The meteorological sat- 
ellite is the most significant innova- 
tion in the atmospheric sciences since 
the computer. By far its greatest 
contribution to date has been to 
provide the meteorologist with cloud- 
cover information on a global basis. 
Research on the use of infrared data 
obtained by satellite is growing; these 
data have real potential, but they 
have not yet contributed to improve-
ment in routine short-period prediction.

The satellite has vastly increased 
day-to-day knowledge of existing 
cloud cover, which in turn has im- 
proved subjectively derived circula- 
tion patterns that embrace fronts, 
major storm centers (including hurri- 
canes), and other large-scale tropical 
features, and even some of the larger 
thunderstorms. It is sometimes feas- 
ible to deduce upper-level winds 
from observed cloud features. 

The satellites assist the forecaster
to make predictions for areas such
as the oceans and regions of the 
southern hemisphere, where data can- 
not be obtained by conventional 
means. In special cases — e.g., in 
overseas military operations and in 
flights over regions of the United 
States not covered by conventional 
data-gathering systems — they can 
be of much, occasionally vital, aid to 
the weather forecaster. Without 
rapid access to good-quality, recent 
satellite read-outs, however, the value 
of the data for short-period forecast- 
ing drops quickly. 

It must be remembered that satel- 
lites describe present conditions; the 
atmospheric scientist is still con- 
fronted with the classical problem of 
predicting how conditions will change. 
Furthermore, satellites do not meas- 
ure parameters beneath the tops of 
clouds (except for thin cirrus). 

Actions to Improve the 
Value of Forecasts 

As noted earlier, an adequate data 
base for evaluating weather forecast- 
ing does not exist. A satisfactory 
evaluation program also does not 
exist with respect to the community 
of atmospheric scientists. There are 
sporadic evaluation programs, but 
statistics for one kind of forecast do 
not necessarily apply to other kinds 
and proper assessments for long pe- 
riods of years are not available. Fur-
ther, forecast-verification programs
are normally conducted by the agen-
cies that make the forecasts them-
selves, leaving open the question of
their objectivity.

Operational Data Transmission —
Over data-sparse areas such as re- 
mote oceanic regions, airlines and 
other flying agencies are already test- 
ing the use of rapid, automated trans- 
mission of operational data via satel- 
lite to management centers. Selected 
meteorological data should be included
and made available to appropriate
weather-forecasting stations.
Similarly, in-flight weather data over 
the continental United States should 
be made available on call to stations 
making short-period predictions for 
the public, for airfields, and for low- 
level flying activities. The large num- 
bers of aircraft in U.S. airspace con- 
stitute existing platforms with the 
potential for providing much valuable 
data for short-period forecasting. 

As a general rule, the shorter the 
period of a forecast, the more de- 
tailed and dense (in three dimensions) 
should be the data used in the predic- 
tion process and the smaller the re- 
quired area to be represented by the 
data. The current rawinsonde net- 
work over the United States is excel- 
lent for long-period forecasts, but as 
the period decreases to half a day or 
less the density of observations be- 
gins to leave much to be desired. 
Further, upper-air wind analyses and
prognoses prepared by the computer
and used by the forecaster are
smoothed in the computational process.

Computer Models — Efforts to pro- 
duce lower-troposphere computer 
models with finer and finer meshes 
should be expanded, since work done 
to date has already shown some 
gain. Development of adequate dis- 
play techniques should accompany 
these efforts. 

Radar and Satellite Data — In- 
creased emphasis should be placed 
on better utilization of radar informa- 
tion, including digital processing and 
use of interactive graphics to display 
data and to integrate them with other 
kinds of information. 

Greatly increased research should 
be conducted to apply meteorological 
satellite data to the short-term fore- 
cast problem. 

The Man-Machine Mix — Consid- 
erably greater effort should be di- 
rected toward the man-machine mix 

in forecasting. There should be 
greater exploitation of the valuable — 
albeit subjective — Norwegian theory 
of air masses and fronts. Digital 
graphics offer significant potential. 

Microminiaturization should be em- 
phasized in the development of new 
sensing and processing equipment in 
the interests of reducing lag times of 
sensors as well as of reducing the 
weight of equipment that must be
borne on aircraft, rockets, or balloons.

Regardless of research directed at 
improving short-period forecasting, 
however, progress will almost in- 
evitably be slow (except for new 
kinds of applications) because of the 
chaotic nature of smaller-scale at- 
mospheric phenomena and because 
meteorologists are required to state 
certain kinds of prediction in prob- 
abilistic terms. In some areas the
state of the art appears to have
reached a plateau; if so, breakthroughs
are needed.



Clear Air Turbulence and Atmospheric Processes 

Understanding of atmospheric 
processes appears to decrease rapidly 
with decreasing scale or typical size 
of the phenomena considered. Thus, 
it has only recently been recognized 
that turbulence in clear air in the 
upper troposphere and lower strato- 
sphere is an important part of the 
energy cycle of the atmosphere. 

Although motions in the atmo- 
sphere at scales less than a kilometer 
are often turbulent to some degree, 
the occasional outbreaks of moderate 
or severe turbulence that have 
plagued aviation for the past decade 
or more have important implications 
for the study and prediction of large- 
scale atmospheric motion. 

These motions are a result of dif- 
ferential heating. In the process of 
attempting to restore a uniform dis- 
tribution of heat in the atmosphere, 
the motions and processes of the 
atmosphere create narrow layers in 
which both wind and temperature 
variations are concentrated. The 
sharpest of these occur in the boun- 
dary layer, in the fronts associated 
with weather systems, and in the 
vicinity of the jet stream near the tropopause.

In each of these regions of strong 
gradients, turbulence typically occurs 
when the gradients become strong 
enough. The turbulent motions cause 
mixing and tend to smooth the varia- 
tions of wind and temperature. In 
the process, a considerable amount of 
heat and momentum may be trans- 
ported from one region to another, 
and with all turbulence there is a 
conversion of kinetic energy to ther- 
mal energy. 

The basic cycle of events in the 
atmosphere may thus be viewed as 
a sequence in which: 

1. Large-scale gradients created by 
differential heating result in 
large-scale motions. 

2. The large-scale motions con- 
centrate the variations caused 
by this differential heating into 
narrow zones which now con- 
tain a significant fraction of the 
total variation. 

3. As the degree of concentration 
increases, turbulence arises in 
these zones, destroying the 
strong variations and thus mod- 
ifying the larger-scale structure 
of the atmosphere. 

In this sense, turbulence in the zones 
of concentrated variation is an essen- 
tial part of the thermodynamic proc- 
esses of the atmosphere. 

Atmospheric scientists have long 
known that both the transport of 
heat and momentum and the dissipa- 
tion of kinetic energy were strong 
in the boundary layer and in frontal 
regions. The importance of these 
same processes in clear air turbulence 
near the jet stream is a recent discovery.

Perhaps the most important prac- 
tical implication of this development 
concerns the feasibility of long-range 
numerical weather prediction. Such 
predictions cannot be reliable for ex- 
tended periods unless the computer 
models correctly simulate the energy 
budget or energy cycle of the atmo- 
sphere. It now appears likely that this 
cannot be done without taking ac- 
count of the role of clear air turbu- 
lence — a phenomenon of too small 
a scale to be revealed by present 
standard sounding techniques or to 
be represented directly with the data 
fields used in the computer models. 

The importance of clear air tur- 
bulence in the energy budget is illus- 
trated by its contribution to the rate 
of dissipation of kinetic energy in the 
atmosphere. Although the exact 
value is subject to some controversy, 
the total dissipation rate probably 
will be somewhere in the range 5 to 
8 watts per square meter, of which 
2 to 3 watts per square meter prob- 
ably occurs in the boundary layer. 
Studies of dissipation by Kung, using 
standard meteorological data, and by 
Trout and Panofsky, using aircraft 
data, both arrive at an estimate of 
1.3 watts per square meter for the 
dissipation in the altitude range 
25,000 to 40,000 feet near the tropopause.

Thus, despite the present uncer- 
tainty of these estimates, it appears 
that the region near the tropopause 
contributes on the order of 20 per- 
cent of the total dissipation of the 
atmosphere. The rate of dissipation 
in severe turbulence is about 400 
times as large as that in air reported 
smooth by pilots and about 20 times 
as large as that in light turbulence. 
The estimates of Trout and Panofsky 
show that the light and moderate 
turbulence contributes the major 
fraction of the total dissipation in 
the layer near the tropopause; fur- 
thermore, their estimates show that 
about equal fractions of the dissipa- 
tion in this layer are probably due to 
the severe clear air turbulence and 
to the smooth air. (It should be 
noted that the estimate of the con- 
tribution of severe turbulence is un- 
doubtedly too low, because pilots 
attempt to avoid it if at all possible.) 
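The arithmetic behind the 20-percent figure is easy to check. The sketch below (an illustration, using only the estimates quoted above) divides the tropopause-layer dissipation by the quoted range of plausible totals:

```python
# Share of total atmospheric kinetic-energy dissipation attributable
# to the layer near the tropopause, using the estimates quoted above.
total_low, total_high = 5.0, 8.0   # total dissipation rate, watts/m^2
tropopause_layer = 1.3             # Kung; Trout and Panofsky, watts/m^2

share_low = tropopause_layer / total_high    # most conservative case
share_high = tropopause_layer / total_low    # least conservative case

print(f"{share_low:.0%} to {share_high:.0%}")  # prints "16% to 26%"
```

Either way the share is on the order of 20 percent, as stated above.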

Available Observational Data 

Most of what is known about the 
structure of clear air turbulence and 



the regions in which it occurs results 
from investigations motivated by its 
impact on aviation. Clear air tur- 
bulence has caused injuries to crew 
members and passengers on commer- 
cial airlines, loss of control, and 
damage to aircraft structures. For 
these reasons, two main types of 
investigations have been carried out. 

Pilot Reports — In the first type, 
reports of civil and military pilots are 
used in conjunction with standard 
weather data in an attempt to derive 
a gross climatology of both the fre- 
quency of occurrence of clear air 
turbulence and its association with 
wind and temperature fields. These 
data are biased because pilots try to 
avoid clear air turbulence; further- 
more, the pilot reports are subjective 
and not uniform, due both to varying 
pilot temperament and to varying 
aircraft response characteristics.

Instrumented Aircraft — In the sec- 
ond, aircraft specially instrumented to 
measure the gust velocities compris- 
ing the clear air turbulence are flown 
into such regions. The resulting data 
have been analyzed in a variety of 
ways. These programs have con- 
tributed significant and valuable in- 
formation about the internal physics 
of the turbulent motion and about 
certain aspects of its statistical char- 
acteristics. The data are biased, how- 
ever, by the fact that turbulence was 
being sought by pilots; thus, they 
cannot be used directly to establish 
the frequency of occurrence of clear 
air turbulence. 

A more serious defect, from the 
scientific standpoint, is that these 
programs were conceived on the basis 
of the needs of aviation and aero- 
nautical engineering; they were not 
designed to reveal information about 
the physics of turbulence or its de- 
tails or interactions with larger-scale 
flows. Nevertheless, the available 
data could be used for scientific pur- 
poses more extensively than they 
have been. 

In the past few years attempts to 
conduct scientific studies of the phys- 
ics of clear air turbulence with spe- 
cially instrumented aircraft (in some 
cases with simultaneous use of 
ground-based radars) have been 
started in the United States, Canada, 
England, and the Soviet Union. Al- 
though the preliminary results from 
these programs appear both promis- 
ing and encouraging, no definitive 
body of knowledge has yet emerged. 
The problem is that the accuracy of 
data required for scientific study of 
the physics of clear air turbulence 
and its interactions with the envi- 
ronment leads to requirements for 
basic sensors that severely test, or 
even exceed, current instrumenta- 
tion capabilities. 

Theoretical Knowledge 

Despite these deficiencies in the 
collection of empirical data about 
clear air turbulence, there does appear 
to have been recent theoretical prog- 
ress. Atmospheric scientists have 
long suspected that clear air tur- 
bulence is primarily a result of a 
particular mode of fluid-flow in- 
stability that occurs when there is 
weak density stratification relative to 
rapid vertical variation in the flow 
velocity. This phenomenon has been 
modeled in the laboratory by Thorpe, 
seen under water in the Mediterra- 
nean by Woods, and the character- 
istic shape has appeared on the scopes 
of radars used in turbulence studies 
by Hardy, Glover, and Ottersten as 
well as in a few photographs taken 
when the process was made visible 
by clouds. The hypothesis that clear 
air turbulence is indeed a mani- 
festation of this particular fluid-flow 
instability provides an important con- 
ceptual basis for planning the struc- 
ture of empirical investigations. 

Recent work has also suggested 
that internal gravity waves in the at- 
mosphere may be linked with the 
formation of clear air turbulence. An 
interesting possibility is that the 

waves may be absorbed in shear 
layers, and thus may act as a trigger 
for the outbreak of turbulence. The 
fact that gravity waves are often 
generated by flow over mountains 
may explain why clear air turbulence 
occurs more frequently in mountain- 
ous regions. 

Basic Equations — Although the 
basic laws that govern clear air tur- 
bulence are the same mechanical 
and thermodynamic ones that apply 
to all fluid motion and can be ex- 
pressed mathematically, there has 
been little success in applying the 
equations to the problem. The main 
reason is that mathematical theories 
that provide solutions to these equa- 
tions do not seem to exist. The es- 
sential difficulty is that turbulence is 
a distinctly nonlinear process, and 
interactions on different scales are
a crucial part of the physical phenomenon.

It is precisely this that makes clear 
air turbulence important to the en- 
ergy cycle of the atmosphere. The 
kinetic energy destroyed by the tur- 
bulence comes from the kinetic en- 
ergy of much larger-scale flows — 
those that we attempt to predict with 
numerical methods. The equations 
used in the computer models apply 
to averages of the variables over 
quite a large region, and should in- 
clude terms that express the effect 
of smaller-scale motions within the 
region of averaging upon the aver- 
aged variables. Here again, the 
mathematical form of the correct 
equations is known, but some prac- 
tical method must be found for rep- 
resenting in the models the contribu- 
tions to these terms from the intense 
processes occurring in both clouds 
and clear air. 
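One common practical device for representing such terms, sketched below as an assumption rather than anything prescribed here, is to model the unresolved turbulent transport as diffusion of the grid-averaged field, with an eddy coefficient standing in for the small-scale motions:

```python
def subgrid_mixing_tendency(field, k_eddy, dz):
    """Tendency of a grid-averaged variable due to unresolved turbulence,
    modeled as vertical eddy diffusion: K * d2(field)/dz2.

    field  -- layer-mean values on a uniform vertical grid
    k_eddy -- eddy diffusivity, m^2/s (a stand-in for subgrid turbulence)
    dz     -- grid spacing, m
    """
    n = len(field)
    tendency = [0.0] * n  # boundaries held fixed in this sketch
    for i in range(1, n - 1):
        tendency[i] = k_eddy * (field[i + 1] - 2.0 * field[i] + field[i - 1]) / dz ** 2
    return tendency

# A sharp kink in the averaged field is eroded, mimicking the mixing
# that clear air turbulence accomplishes inside the grid box.
print(subgrid_mixing_tendency([300.0, 300.0, 310.0, 310.0], k_eddy=10.0, dz=100.0))
# [0.0, 0.01, -0.01, 0.0]  (units of the field per second)
```

The whole difficulty noted in the text lies in choosing the coefficient: a fixed eddy diffusivity cannot capture the intermittent, intense contributions of clear air turbulence.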

This probably can be accomplished 
for clear air turbulence only when a 
great deal more is known about its 
characteristics and its interactions 
with the large-scale processes. The 
most pressing need is for a thorough 
empirical study with aircraft, radar, 



and other means (perhaps laser tech- 
niques). Such a study has been 
recommended by the U.S. Committee 
for the Global Atmospheric Research Program.

Specific Questions About 
Clear Air Turbulence 

The major questions concerning 
clear air turbulence now requiring 
answers fall into three groups: 

First, questions concerning the 
origin or onset of clear air turbulence: 

1. Is clear air turbulence generally 
the result of a particular fluid- 
flow instability? If so, what are 
the crucial parameters of the instability?

2. What are the typical atmos- 
pheric features in which clear 
air turbulence occurs and how 
is their structure related to the 
parameters of fluid instabilities? 
(Of particular interest are the 
relationships to vertical wind 
shear, horizontal temperature 
gradients, and the Richardson number.)

3. Are other small-scale proc- 
esses, examples being gravity 
waves or local heating, impor- 
tant in the formation of clear 
air turbulence? 

Second, questions concerning the 
evolution of clear air turbulence: 

1. What is the precise evolution 
of the atmospheric variables at 

various scales during an out- 
break of clear air turbulence? 

2. How can this evolution be most 
economically summarized or described?

3. What are the temporal charac- 
teristics of the transport of 
momentum and heat, the flux 
and dissipation of energy, and 
the stress imposed on the 
larger-scale flow during an 
outbreak of clear air turbulence?

4. What are the relationships be- 
tween processes occurring in 
one part of a patch of clear 
air turbulence and those of 
another? Are there relation- 
ships between apparently dis- 
tinct patches of turbulence? 

5. What characterizes the termina- 
tion of an outbreak of clear air 
turbulence? What scars does 
turbulence leave in its environment?

Third, questions concerning the 
implication of clear air turbulence: 

1. How often do patches of clear 
air turbulence of various sizes 
and intensity actually occur in 
various regions of the atmosphere?

2. What is the usual intensity of 
turbulence in the free atmos- 
phere in regions in which flight 
is sensibly smooth? 

3. How important, quantitatively, is
clear air turbulence in the atmos-
phere's energy cycle compared to
regions with the usual intensity
of turbulence in air smooth for
flight?

4. How large are the terms ex- 
pressing the effects of clear air 
turbulence in the usual mete- 
orological equations (used for 
numerical prediction) compared 
to other terms? 

The urgent needs of aviation and 
aeronautical engineering for informa- 
tion on clear air turbulence and for 
reliable predictions of its occurrence 
will be finally and completely satis- 
fied only when a full scientific under- 
standing of the phenomenon is ob- 
tained. The same understanding will 
permit accurate determination of 
whether the effects of clear air tur- 
bulence must be incorporated in 
an attempt at extended numerical 
weather prediction. If this is neces- 
sary, and successful methods can be 
found, it will mark the crossing of 
a long plateau in attempts to under- 
stand the interactions of large- and 
small-scale motions. 

The economic and social benefits 
that would accrue from a capability 
for long-range weather prediction 
and the needs of aviation make it 
imperative that the importance and 
characteristics of clear air turbulence 
in the general circulation of the at-
mosphere be investigated and completely
understood.

Prediction and Detection of Wave-Induced Turbulence 

The phenomenon of "clear air 
turbulence" is of particular impor- 
tance to man's activities within the 
atmosphere because: (a) it is both a 
hindrance and hazard to aviation and 
(b) it accounts for roughly 20 to 30 
percent of the total dissipation of the 
atmosphere's energy. The latter fact, 

only recently discovered, relates di- 
rectly to our attempts to predict the 
global circulation weeks in advance. 
Without an adequate appraisal of this 
significant portion of the total energy 
budget, it will be impossible to model 
and predict the future state of the atmosphere.

Dimensions of the Problem 

The name "clear air turbulence," 
or "CAT," has conventionally been 
used to refer to turbulence occurring 
several kilometers above the earth's 
surface and in air that is free of 
clouds and strong convective currents.
But our understanding of
CAT processes, which reached a new 
climax only in 1969-70, strongly im- 
plies that identical mechanisms also 
occur within clouds and storm systems.

Cumulative evidence is sufficiently 
persuasive to conclude that CAT 
occurs in internal fronts or layers in 
which the air is statically stable and 
across which there is strong shear of 
the wind either in speed or direction. 
Such conditions commonly prevail at 
both warm and cold fronts marked 
by clouds and precipitation. Increas- 
ingly abundant aircraft incidents, 
some of them fatal, also suggest that 
the CAT mechanism occurs at such 
frontal boundaries. The fact that 
the process has not been clearly 
identified as such is due to the general 
assumption by pilots and meteorolo- 
gists that turbulence within clouds 
or storms is more commonly due to 
convective- or thunderstorm-like ac- 
tivity. But this assumption is un- 
tenable when no direct meteorologi- 
cal evidence of convective activity 
exists and when aircraft undergo 
forced maneuvers that can only be 
associated with waves and breaking 
waves; the latter are now recognized 
to be the primary, if not the sole, 
origin of what we have previously 
called "CAT." 

Newly Recognized Features — Be- 
cause it now seems clear that severe 
turbulence of the nonconvective va- 
riety also occurs within clouds and 
storms, it is fallacious to continue the 
usage "clear air turbulence." And 
since turbulence in clear air and in
clouds and storms alike owes its origin
to breaking waves, it has been pro- 
posed that such turbulence be re- 
named "wave-induced turbulence," or 
"WIT." Unless we recognize these 
important facts, we shall fail to ap- 
preciate the full dimensions of the 
problem. For example, while CAT 
generally occurs at relatively high 
altitudes, thus allowing the pilot time 
to recover from a turbulence-caused 
upset, severe WIT within frontal 

storms may occur at very low alti- 
tudes without the possibility of safe 
recovery. Indeed, it is now reason- 
able to suppose that many previously 
unexplained fatal and near-fatal air- 
craft accidents owe their origin to 
WIT. Until this phenomenon is fully 
appreciated both by pilots and me- 
teorologists, aircraft will continue to 
encounter potentially fatal hazards 
without warning. 

Another deceptive aspect of the 
acronym "CAT" is its exclusion of 
wave-induced turbulence near cloud 
boundaries. This may have unfortu- 
nate consequences, since it is well 
known that cloud tops commonly oc- 
cur at the base of temperature inver- 
sions, and the latter, when marked by 
sufficiently strong wind shear, are the 
seat of wave-induced turbulence. In- 
deed, there is reason to believe that 
the presence of clouds below the in- 
version will enhance the chance of 
WIT above. This is because radiative 
and evaporative cooling from cloud 
tops induces convective overturning 
and this decreases the wind shear be- 
low the inversion while enhancing the 
shear in and above the inversion. 
Similar arguments suggest that cloud 
bases may also be preferred regions 
of WIT. 

Finally, recent radar observations 
(both ultra-high-resolution acoustic 
and microwave) of stable and break- 
ing waves indicate that WIT is an 
almost ubiquitous feature at the low- 
level nocturnal inversion and the ma- 
rine inversion. On occasion, there- 
fore, especially in association with the 
low-level nocturnal jet, we may ex- 
pect moderate to severe turbulence at 
low levels. These situations would be 
excluded from the present definition 
of CAT, which is restricted to turbu- 
lence at heights above the middle 
troposphere. Equally important, how- 
ever, is the recognition that WIT 
plays a role in the mixing processes 
at the top of the boundary layer. This 
may have significant consequences for 
the metamorphosis of the boundary 
layer, and thus upon air-pollution potential.

Hazards and Cost Implications — 
All this is by way of indicating that 
WIT is far more widespread than is 
presently recognized. The associated 
hazards are also greater; and the con- 
sequences, both in terms of basic 
atmospheric processes and of ultimate 
operational predictability, are more 
far-reaching. This is so simply be- 
cause our present classification of 
CAT excludes the many occurrences 
at low levels that might be confused 
with ordinary boundary-layer turbu- 
lence, and those within and near 
clouds and precipitation that are often 
misinterpreted as convectively pro- 
duced turbulence. 

Statistics on CAT occurrence, and 
its associated hazards and cost to 
aviation, must therefore be viewed as 
gross underestimates of the broader, 
but identical, phenomenon of WIT. 
Even so, the statistics compiled by 
the National Committee for Clear Air 
Turbulence indicate that damage to 
aircraft may have cost the Depart- 
ment of Defense $30 million from 
1963 to 1965, to say nothing of crew 
injuries or the effect of turbulence in 
reducing combat effectiveness. The 
committee reported a study that 
showed the cost to commercial avia- 
tion in 1964 to have exceeded $18 
million, of which a major portion was 
the increased expense caused by di- 
versions around areas in which turbu- 
lence was forecast or had occurred. 

The WIT Mechanism and Its Prediction

Our knowledge of the WIT mecha- 
nism is substantial — at least com- 
pared to the state of knowledge be- 
fore 1968. A great deal more needs 
to be learned, however. Newly de- 
veloped observational tools promise 
major advances in understanding the 
WIT mechanism which should open 
the way to a more realistic appraisal 
of the climatology of WIT and the 
physical conditions under which it oc- 
curs. Together, the instruments and 
the increased understanding should 
lead to improved predictability, al- 



though some of our new knowledge 
implies clear-cut limitations in this regard.

The Origin of WIT — Classical 
theory concerns the rapid growth of 
perturbations on an internal front 
(inversion) in a fluid, called Kelvin- 
Helmholtz instability, which leads to 
large-amplitude Kelvin-Helmholtz 
(K-H) waves. The rolling-up of these 
waves under the action of wind shear, 
and their subsequent breaking, like 
ocean waves breaking on the shore, 
produces turbulence. 

The process may be described sim- 
ply, as follows: Suppose that we have 
two fluids of different density and 
that we arrange them in a stable 
stratification with the lighter one on 
top. Then we set the fluids in motion, 
with one of the two moving faster 
than the other, or in the direction 
opposite to the other. If the density 
change across the interface is strong 
enough and the shear is not too great, 
smaller perturbations will be damped 
out and the interface will come back 
to rest. But when the shear is strong 
relative to the density gradient, the 
situation is unstable and the pertur- 
bations will grow rapidly with time; 
vortices are created, as though a tum- 
bleweed were being rolled between 
two streams of air. 

The condition leading to unstable 
K-H waves and turbulence is that the 
ratio of buoyancy forces (working to 
damp vertical perturbations) to shear- 
ing forces (working to enhance them) 
should be less than 1. One-fourth of 
this ratio is the gradient Richardson 
number, Ri, which is defined as

    Ri = (g/θ)(∂θ/∂z) / (∂V/∂z)²    (1)

where g is the acceleration of gravity,
θ is potential temperature, ∂θ/∂z is
the vertical gradient of θ (positive
whenever the atmosphere is more
stable than in the neutrally buoyant
or adiabatic case), V is the horizontal
wind velocity, and ∂V/∂z is the wind
shear. A result obtained in 1931, that
the critical Ri leading to K-H insta-
bility is 1/4, has been confirmed.
More precisely, Ri > 1/4 is sufficient
for stability, and Ri ≤ 1/4 is neces-
sary, but not sufficient, for instability.
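The criterion lends itself to a direct numerical check. The short function below (an illustration, not from the report; the sample gradient and shear values are invented) evaluates the gradient Richardson number as defined above and compares it with the critical value of 1/4:

```python
G = 9.81  # acceleration of gravity, m/s^2

def richardson_number(theta, dtheta_dz, dv_dz):
    """Gradient Richardson number: (g/theta)(dtheta/dz) / (dV/dz)^2.

    theta     -- potential temperature of the layer, K
    dtheta_dz -- vertical gradient of potential temperature, K/m
    dv_dz     -- vertical wind shear, 1/s
    """
    return (G / theta) * dtheta_dz / dv_dz ** 2

# Illustrative layer: stable stratification (5 K per km) under
# strong shear (3 m/s change per 100 m of height).
ri = richardson_number(theta=300.0, dtheta_dz=0.005, dv_dz=0.03)
print(round(ri, 3))  # 0.182
print(ri <= 0.25)    # True: the necessary (not sufficient)
                     # condition for K-H instability is met
```

Halving the shear in this example raises Ri fourfold, well above the critical value, which is why thin layers of concentrated shear are the preferred seat of the instability.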

The entire process has been dem- 
onstrated by Thorpe in laboratory 
fluid experiments and by Woods in 
thin, hydrostatically stable sheets in 
the summer thermocline of the Medi- 
terranean Sea. Both of these experi- 
ments show the development of beau- 
tifully formed billows, or K-H waves 
which roll up into vortices and finally 
break. And both demonstrate the gen-
eral validity of the critical Ri ≤ 1/4.

Evidence from the Atmosphere —
Ludlam has observed the existence of 
the K-H instability mechanism in the 
atmosphere by the presence of billow 
clouds, but only rarely is the combina-
tion of cloud and stability conditions
just right to produce the
lovely roll vortices in the clouds that 
are seen in the laboratory and the 
sea. The observation of their com- 
mon presence in the atmosphere has 
awaited the use of ultrasensitive ra- 
dars capable of detecting the weak 
perturbations in refractive index (due 
to temperature or humidity perturba- 
tions) which mark sharp inversions. 
Using three powerful radars at Wal- 
lops Island, Virginia, Atlas and his 
colleagues first reported the radar de- 
tection of clear air turbulence at the 
tropopause; Hicks, Angell, Hardy, 
and others have reported K-H waves 
and turbulence in clear air layers 
marked by static stability, large wind 
shear, and small Richardson number. 

Undoubtedly the most striking evi- 
dence of the K-H process as a cause 
of WIT, and of its common occur- 
rence at internal fronts, are the ob-
servations made possible by the use of 
a unique new ultrasensitive FM-CW 
(Frequency Modulated Continuous 
Wave) microwave radar at the Naval 
Electronics Laboratory Center, San 
Diego. This radar is capable of one- 

meter vertical resolution, roughly a
hundredfold increase over that pre- 
viously available with radars of com- 
parable sensitivity. With this new 
tool, it has been reported that K-H 
waves are a virtually ubiquitous fea- 
ture of the marine inversion over San 
Diego at altitudes up to about one 
kilometer. Indeed, the atmospheric 
K-H waves observed in this manner 
are commonly as beautiful in form as 
those produced in the laboratory and 
observed in the sea. (See Figure IV-10.)
It is worth noting that the unex-
pectedly classical form of the waves, 
and their great frequency of occur- 
rence within the marine inversion, 
recommend the southwest coast of
the United States as an atmospheric 
laboratory for studies of WIT. 

What the Data Show — The fact 
that the observed K-H waves are fre- 
quently restricted to exceedingly thin 
layers, sometimes only a few meters 
in depth, and rarely with amplitudes 
as large as 100 meters, explains why 
the previously available high-sensi- 
tivity radars of poor resolution could 
not identify them. In other words, 
the K-H wave structure was simply 
too small to be seen and the echoes 
appeared merely as thin, smooth lay- 
ers marking the base of the inversion. 

The new data also indicate that, 
though K-H wave activity may be in 
progress, the associated turbulence 
will not be intense unless the waves 
grow to large amplitude prior to 
breaking. This has been demon- 
strated by the erratic perturbations of 
the height of the radar-detected layer, 
indicative of moderate turbulence, 
which resulted from the breaking of 
K-H waves of 75-meter amplitude. 
In general, waves of significantly 
smaller amplitude appear not to pro- 
duce appreciable turbulence. 

Work now in progress shows that 
the turbulent kinetic energy following 
the breaking of the roll vortex of a 
K-H wave is directly proportional to 
the kinetic energy of the vortex im- 




[Figure IV-10. FM-CW radar record; vertical axis: height in meters (0 to 300); horizontal axis: time (PDT), August 6, 1969.]

(Illustration Courtesy of the American Geophysical Union ) 

Radar echoes from the clear atmosphere reveal a group of 
amplifying and breaking waves in the low-level temperature 
inversion at San Diego, California, as observed with a special 
FM-CW radar. Waves are triggered by the sharp change of 
wind speed across the interface between the cool, moist 
marine layer and the warmer, drier air aloft. They move 
through the radar beam at the speed of the wind at their 
mean height, about 4 knots, so that crests appear at succes- 
sive stages of development. In the second wave at 1919 PDT 
cooler air from the wave peak drops rapidly as the breaking 

begins. By 1929 PDT the layer has become fully turbulent, 
and the radar echo subsequently weakens. Note, too, the 
secondary waves near the crests at 1919.5, 1922, and 1926 
PDT; these secondary waves give rise to microscale turbu- 
lence, which causes the echo layers to be detected. The 
resulting turbulence would be weak, as detected by an air- 
craft. Waves of this type occur regularly in the low-level 
inversion, and are believed to be similar to those which cause 
the severe turbulence occasionally encountered by jet aircraft 
at high altitude. 

mediately prior to breaking. The 
r.m.s. velocity of a vortex, 

V_rms = 0.707 Aω

      = 0.707 A(∂V/∂z)        (2)

where A is the amplitude of the roll
or wave, ω its angular rotation rate
or vorticity, and ∂V/∂z the wind
shear, thus provides a simple estimate
of the expected turbulence; prelimi- 
nary tests support this hypothesis. 
Moreover, it is of particular interest 
that the high-resolution radar data 
provide direct measures of A and its 
rate of growth as well as of ∂V/∂z,
the shear. Similarly, the turbulence
intensity may be deduced from the
r.m.s. perturbations in the echo-layer 
height subsequent to breaking. (As 
yet, the inherent doppler capability of 
the FM-CW radar, which would pro- 
vide direct measurements of both 
vertical motion and roll vorticity, has 
not been implemented.) 
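Equation (2) can be applied numerically; the following is a minimal sketch, in which the amplitude and shear values are hypothetical, chosen only to be of the order of those reported in the San Diego records:

```python
import math

def vortex_rms_velocity(amplitude_m, wind_shear_per_s):
    """Estimate the r.m.s. velocity of a K-H roll vortex via Equation (2):
    V_rms = 0.707 * A * (dV/dz), taking the vorticity equal to the shear."""
    return (1.0 / math.sqrt(2.0)) * amplitude_m * wind_shear_per_s

# Hypothetical case: a 75-meter-amplitude wave in shear of 0.02 per second
v = vortex_rms_velocity(75.0, 0.02)
print(round(v, 2), "m/s")  # 1.06 m/s
```

The estimate scales linearly in both amplitude and shear, which is why the radar's direct measures of A and its growth rate are the key inputs.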

Unresolved Problems — If Equa- 
tion (2) is validated by experiments 
now in progress, we may contemplate 
the prediction of WIT from measure- 
ments and predictions of maximum 
wave amplitude and shear. But this 
assumes that we shall be able to pre- 
dict the latter. At this writing, the 
relationship of the maximum wave 
amplitude to the thermal and wind 
structure of the environment is not 
understood. Present K-H wave the- 
ory is limited to small-amplitude 
waves and their initial growth rates; 
clearly, the theory needs to be ex- 
tended to finite-amplitude waves. But 
rapid progress is more likely to come 
from experiments in the real atmos- 
phere, such as those already men- 
tioned, which involve somewhat more 
complex wind and temperature pro- 
files and interactions than are likely 
to be tractable in finite-amplitude 
theoretical models. 

In this regard, it should also be 
noted that the critical Richardson 
number, Ri_c < ¼, which might be
regarded as a predictor of WIT, refers 
only to the initial growth stage of 
K-H instability. Since the high-reso- 
lution radar shows breaking K-H 
waves with amplitudes as small as 5 
meters (with negligible resulting tur- 
bulence) and as large as 100 meters 
(with appreciable turbulence), a seri- 
ous question is raised as to the verti- 
cal scales over which thermal stabil- 
ity and shear — and so Ri — need to be 
measured. Surely, the present data 
imply that Ri must be observed on 
scales of a meter or less to account 
for the small-amplitude waves. But 
it is not so clear that measurements 
with resolution of 10 to 100 meters 
or more, such as those available from 
present-day radiosondes, would be 
adequate to predict the occurrence of 
larger-amplitude waves. What, for 


example, happens to a growing un- 
stable wave in a thin stratum when it 
reaches a dynamically stable layer in 
which Ri is significantly greater than 
¼? We do not know. This is one
of many important questions that 
needs to be answered by further research.

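The measurement-scale question can be made concrete with the standard definition of the gradient Richardson number, Ri = (g/θ)(∂θ/∂z)/(∂V/∂z)². The profile values in the sketch below are hypothetical, constructed so that a thin unstable stratum is invisible when the gradients are averaged over a radiosonde-like 100-meter depth:

```python
# Gradient Richardson number Ri = (g/theta) * (dtheta/dz) / (dV/dz)**2,
# evaluated over two vertical averaging scales. Profile values are
# hypothetical, chosen so that a thin, strongly sheared 10-m layer has
# Ri < 1/4 while the 100-m bulk average looks dynamically stable.
G = 9.81  # gravitational acceleration, m/s^2

def richardson(theta1, theta2, v1, v2, dz, theta_mean=290.0):
    dtheta_dz = (theta2 - theta1) / dz   # potential-temperature gradient, K/m
    dv_dz = (v2 - v1) / dz               # wind shear, 1/s
    return (G / theta_mean) * dtheta_dz / dv_dz**2

# Thin 10-m layer: modest stability, strong shear -> unstable (Ri < 1/4)
ri_fine = richardson(290.0, 290.1, 5.0, 7.0, 10.0)
# Same wind change spread over 100 m of greater stability -> Ri > 1/4
ri_coarse = richardson(290.0, 291.0, 5.0, 7.0, 100.0)
print(ri_fine < 0.25, ri_coarse > 0.25)  # True True
```

The contrast illustrates why Ri observed on meter scales and Ri observed on radiosonde scales can give opposite verdicts on the same layer.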
Other aspects of the new radar ob- 
servations that are relevant to flight 
safety as well as to aircraft investiga- 
tions of WIT and to its predictability, 
are: (a) the sharp vertical gradations 
in turbulence intensity (i.e., some- 
times the turbulence is restricted to a 
stratum no more than a few tens of 
meters thick) and (b) the
intermittency of K-H waves and turbulence.

It is not surprising that one air- 
craft experiences significant turbu- 
lence while the next one encounters 
none in the same region. While the 
radar observations demonstrate that 
the base of the inversion and subsidi- 
ary sheets within it are the seat of 
K-H wave activity, their breaking is 
self-destructive in that the shear and 
stability to which they owed their 
origin are decreased, and Ri thus in- 
creased above its critical level. Ac- 
cordingly, the breaking action acts as 
an escape valve to release the pressure 
for K-H activity, and turns the waves 
and turbulence off. On the other 
hand, the larger-scale atmospheric 
processes work to restore the initial 
conditions, and new K-H waves are generated.

All this speaks to the difficult ques- 
tions of aircraft experiments directed 
to observing the initial conditions for 
WIT, the energy budget involved, 
and, indeed, its entire life cycle. Pre- 
cisely where and when should the 
measurements be made and how are 
they to be interpreted in the light of 
WIT's great spatial and temporal vari- 
ability? Clearly, such experiments 
should preferably be conducted si- 
multaneously with a radar capable of 
"seeing" the waves and turbulence directly.

Prospects for Prediction — The prior 
discussion raises serious doubts as to 
the ultimate achievement of pinpoint 
forecasts of WIT in either space or 
time. While one may expect, eventu- 
ally, to be able to predict the medium- 
to large-scale processes that work to 
develop and sharpen internal fronts 
and shear, many presently unobserv- 
able small-scale phenomena (gravity 
waves, orographic lifting and tilting, 
convective motions, and such) will 
operate to reduce Ri to its critical 
value locally and trigger wave activity 
here and there. Accordingly, while 
we may expect significant improve- 
ments in the predictability of the 
heights of internal surfaces, and thus 
in the heights at which WIT is likely, 
and probably in the predicted in- 
tensity as well, the actual forecast will 
probably remain a probabilistic one 
for many years to come. We should 
therefore direct a good share of our 
attention to the remote-probing tools 
that are capable of detecting both the 
internal surfaces and the occurrence 
of waves and turbulence. As in the 
case of radar detection of thunder- 
storms, such observations are likely 
to provide the best short-term predic- 
tions of WIT for the foreseeable future.

Instrumentation for Detecting WIT 

Although we have spoken exten- 
sively of the capability of ultrasensi- 
tive high-resolution radar techniques 
in detecting WIT, a few additional re- 
marks need to be made concerning 
actual warning devices. 

Ground-Based Devices — High- 
resolution FM-CW microwave radar 
is an obvious candidate for this task. 
At present, however, it is limited to a 
detection range (based on over-all 
sensitivity in detecting clear air in- 
versions) of about 2 kilometers. An 
increase of range to 15 kilometers is 
attainable with available state-of-the- 
art components. This would accom- 
plish the detection of clear-air WIT 
throughout the depth of the troposphere.
A network of such stations
across the nation, with fixed, ver- 
tically pointing antennas, is econom- 
ically feasible. Fortunately, the sig- 
nificant internal fronts at which WIT 
occurs are horizontally extensive, so 
that detection of waves and turbu- 
lence at one or more stations would 
indicate the layers affected and the 
likelihood of WIT at the same height 
(or interpolated height for sloping 
layers) in between stations. (Note that 
we emphasize the need for observa- 
tions with a high degree of vertical 
resolution, capable of detecting the 
suspect layers and measuring the 
amplitude and intensity of breaking
waves.)

Airborne Radar — With regard to 
the use of high-resolution FM-CW 
microwave radar on board aircraft for 
purposes of detecting and avoiding 
WIT along the flight path, the 15- 
kilometer range capability would be 
inadequate to provide sufficient warn- 
ing even if a high-gain antenna of the 
required dimensions (10' to 15' effec- 
tive diameter) could be accommodated 
in the aircraft. Moreover, since the 
vertical resolution in such a use-mode 
would correspond to that of the beam 
dimension rather than the available 
high-range resolution, the radar could 
not discern wave amplitude and 
heights with precision. However, the 
use of such a radar in both down- 
ward- and upward-looking directions 
(from large antennas fitted within the 
fuselage structure) does appear feas- 
ible. Clear-air WIT could then be 
avoided by detecting the heights of 
internal surfaces and K-H wave ac- 
tivity above and below flight level 
and assuming continuity of layer 
slope. Whether or not such a system 
should be adopted depends on cost/ 
benefit/risk ratios. The installation 
of a $100,000 radar seems warranted 
when aircraft carry more than 200 
passengers. Certainly, it should be 
adopted for experimental purposes in 
connection with WIT research. The 
potential benefits of airborne high- 
resolution radar to both military and 
commercial aviation could then be 
better evaluated. 



High-Resolution Acoustic Radar is 
another candidate for clear-air WIT 
detection from ground-based stations. 
Such radars have detected thin inter- 
nal surfaces and stable and breaking 
wave activity to heights of 2 kilo- 
meters. The potential to reach 15 
kilometers in the vertical direction 
can probably be realized, although the 
effect of strong winds aloft on the 
refraction of the acoustic beam re- 
mains an open question. Unfortunately,
acoustic radar cannot be used
on board fast-flying aircraft because 
of the slow speed of sound and the 
high acoustic noise levels. 

Future Technology — Finally, a real
hope still remains for the develop- 
ment of a coherent laser radar (or 
LIDAR) sufficiently sensitive to de- 
tect the small background concentra- 
tions of aerosols in the high troposphere
and capable of measuring
turbulence intensity through the dop- 
pler velocities. Although a theoretical 
feasibility study of such a device in 
1966 indicated that the then available 
LIDARs could not accomplish the 
task, more recent developments in 
laser technology may now make such 
a system feasible. The National Aero- 
nautics and Space Administration is 
presently conducting research and de- 
velopment along these lines. 

A Note on Acoustic Monitoring 

As is well known, the propagation 
of sound waves through the atmos- 
phere is strongly affected by wind, 
temperature, and humidity. The pos- 
sibility therefore exists that measure- 
ments of the propagation of sound 
waves could be used to derive infor- 
mation on important meteorological
parameters.

The potential of these methods has 
been analyzed and some experimental 
results published. It has been shown that
acoustic echoes can readily be ob- 
tained from the atmospheric turbu- 
lence and temperature inhomogenei- 
ties always existing in the boundary 
layer of the atmosphere. The equip- 
ment required is relatively simple; it 

involves a radar-like system in which 
pulses of acoustic signal, usually 
about 1 kHz in frequency, are radi-
ated from an acoustic antenna, with 
echoes from the atmospheric structure 
obtained on the same or on a second 
acoustic antenna. 
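The ranging principle of such a system is the same as that of radar, with the speed of sound in place of the speed of light; a minimal sketch, in which the echo delay is an illustrative value and 343 m/s is the nominal speed of sound near 20°C:

```python
def echo_height_m(delay_s, speed_of_sound=343.0):
    """Height of an echoing atmospheric layer from the round-trip delay of
    an acoustic pulse (monostatic geometry, vertically pointing antenna):
    h = c * t / 2. The default speed of sound, 343 m/s, is nominal for
    air near 20 degrees C."""
    return speed_of_sound * delay_s / 2.0

# A 2-second round trip places the echoing layer near 343 m
print(echo_height_m(2.0), "m")  # 343.0 m
```

The slow speed of sound that makes this ranging easy at the surface is the same property that rules the technique out aboard fast-flying aircraft.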

This field of acoustic echo-sounding 
of the atmosphere is very new and 
appears to hold considerable promise 
for studies of the boundary layer of 
the atmosphere — i.e., the lowest sev- 
eral thousand feet. Specifically, re- 
search is now being undertaken to 
identify its usefulness for the quanti- 
tative remote measurement of wind, 
turbulence, humidity, and tempera- 
ture inhomogeneity. If, as expected, 

the technique is shown capable of 
measuring the structure of the bound- 
ary layer and the vertical profiles of 
these meteorological parameters, it 
will represent a major breakthrough 
in remote measurement of the atmos- 
phere, which should be of great value 
to meteorological observations and 
research. Its primary application is 
likely to be in the monitoring of 
meteorological parameters in urban 
and suburban areas, for use by air- 
pollution and aviation agencies. In 
addition, it is already providing the 
research worker with totally new in- 
sight into the detailed structure and 
processes controlling the atmospheric 
boundary layer in which we live. 



Urbanization and Weather 

For centuries, man has speculated 
that major battles, incantations, large 
fires, and, lately, atomic explosions 
could affect weather, although he 
made no serious scientific attempts 
to modify weather until 25 years ago. 
Except for a few localized projects 
involving precipitation increases and 
fog dissipation, however, man's in- 
tentional efforts have yet to pro- 
duce significant, recognized changes. 
Rather, the major means whereby 
man has affected weather have been 
inadvertent — through his urban environment.

Growing Awareness of the Problem 

As long ago as 700 years or more, 
London had achieved a size great 
enough to produce a recognizable ef- 
fect on its local weather, at least in 
terms of reduced visibility and in- 
creased temperature. Since major ur- 
ban areas became prevalent in Europe 
following the Industrial Revolution, 
Europeans have directed considerable 
scientific attention to this problem 
of urban-induced weather change. 
Now that major urban-industrial com- 
plexes exist in many countries, world- 
wide attention has grown rapidly, 
particularly in the United States, 
where the growth of megalopolitan 
areas during the past ten to thirty 
years has brought with it increasing 
public and scientific awareness of the 
degree and, in some cases, the seri- 
ousness of urban effects on weather. 
Recent studies documenting signif- 
icant urban-related precipitation in- 
creases in and downwind of Chicago, 
St. Louis, and industrial complexes in 
the state of Washington have further 
focused scientific and public attention 
on the urban-weather topic and its 
considerable potential. 

Certainly, even the casual observer 
is aware that visibility is more fre- 
quently restricted in a major urban 
complex than in rural areas, and that 
this has come from smoke, other con- 
taminants, increased fog, and their
combination, smog. Most Americans are
now aware that the temperature with- 
in a medium-to-large city is generally 
higher at any given time of the day 
or season than it is in rural areas. 
This temperature effect has been rec- 
ognized and measured for many 
years, since its measurement, at least 
at the surface, is relatively easy. "Heat 
islands" for many cities of various 
sizes have been well documented. 

Urban areas also act as an obstacle 
to decrease winds near the surface, 
to increase turbulence and vertical 
motions above cities, and to create, 
occasionally, a localized rural-urban 
circulation pattern. There have been 
enough descriptive studies, further- 
more, to reveal that many other 
weather conditions are also being 
changed, often dramatically, by urban 
complexes. Although available re- 
sults indicate that urban-induced 
weather changes are restricted to the 
cities and their immediate downwind 
areas and have little effect on macro- 
scale weather conditions, the "urban 
flood" and advent of the megalopolis 
could conceivably lead to significant 
weather changes over large down- 
wind regions. 

Value Judgments — The question 
of desirability of the weather changes 
wrought by urbanization has only re- 
cently been considered. The fact that 
many of the urban-induced changes 
have occurred gradually has not only 
made them difficult to measure quan- 
titatively within the natural variabil- 
ity of weather, but has also made 

them less obvious and, therefore, un- 
wittingly accepted by the urban 
dweller. Now that urbanization is 
nearly universal, American citizens 
have suddenly become aware of 
many of the urban-induced weather 
changes. In general, such changes as 
increased contaminants, higher warm- 
season temperatures, lower winds, 
added fog, increased thunder and hail, 
added snowfall, and decreased visibil- 
ity are considered undesirable. Cer- 
tain urban-related weather changes 
are desirable, however, including 
warmer winters and additional rain- 
fall to cleanse the air and to add water 
in downwind agricultural areas. 

In summary, then, with respect to 
their effects on weather, urban areas 
sometimes act as volcanoes, deserts, 
or irregular forests; as such, they pro- 
duce a wide variety of weather 
changes, at least on a local scale, and 
these changes can be classed as bene- 
ficial or detrimental depending on the 
locale and the interests involved. 

Type and Amount of 
Weather Change 

The changes in weather wrought 
by urbanization include all major 
surface weather conditions. The list 
of elements or conditions affected in- 
cludes the contaminants in the air, 
solar radiation, temperature, visibil- 
ity, humidity, wind speed and direc- 
tion, cloudiness, precipitation, atmos- 
pheric electricity, severe weather, and 
certain mesoscale synoptic weather 
features (e.g., it has been noted that 
the forward motion of fronts is re- 
tarded by urban areas). (See Figure 

The degree of urban effect on any 
element will depend on the climate, 

The table summarizes changes in surface weather conditions attributable to urban- 
ization. Changes are expressed as percent of rural conditions. 

nearness to major water bodies,
topographic features, city size, and the
components of the industrial
complex. Furthermore, the amount of 
effect on the weather at any given 
time depends greatly on the season, 
day of the week, and time of day. 
Thus, urban solar radiation is de- 
creased much more in winter than 
summer; is decreased on weekdays; 
and is decreased more in the morning 
than in the afternoon. Temperature 
increases resulting from the heating 
of urban structures are much greater 
in winter than in summer; hence, the 
average urban air temperature in win- 
ter is 10 percent higher than that in 
rural areas, whereas in summer it is 
only 2 percent higher. However, ur- 
ban temperatures during certain sea- 
sons and weather conditions can be 
as much as 35 percent higher or 5 
percent lower than nearby rural tem- 

It should be emphasized that op- 
posite types of changes in certain 
weather conditions are produced at 
different times. For example, fog is 
generally increased by urbanization, 
although certain types of fogs are ac- 
tually dissipated in large cities. Wind 
speeds are generally decreased, but 
they increase in some light wind con- 

ditions. Snowfall is generally in- 
creased by urban areas, but under 
certain conditions the city heat actu- 
ally melts the descending snow, trans- 
forming it into rain. 

Current Scientific Status 

Most studies of urban effects on 
weather have been descriptive and 
based on surface climatic data. Fur- 
thermore, only a few studies have at- 
tempted to investigate the causative 
factors and the physical processes in- 
volved in urban-produced weather 
changes. Without careful investiga- 
tions of the processes whereby urban 
conditions affect the weather, there is 
little hope for developing an adequate 
understanding and, hence, predictive
capability.

Data Base — The present data base 
is woefully inadequate for studies 
of most urban-affected weather ele- 
ments. Two-dimensional spatial de- 
scriptions of urban effects on weather 
elements are now adequate only for 
temperature patterns. Data for 
weather changes in the vertical are 
totally inadequate for temperature as 
well as for all other weather elements. 

Descriptive types of urban-weather 
studies based on existing historical 
records tend to be seriously limited in 
their spatial information. For instance, 
studies of urban-rural fog differences 
have typically been based on surface 
values from a point in the central city 
and one at the airport; although these 
may indicate a 30 percent difference, 
they fail to describe the horizontal 
distribution of fog over the urban 
or rural environs. 

Unfortunately, adequate descrip- 
tions of the surface weather changes 
are not available for most metropoli- 
tan areas of the United States. Study 
of the urban-weather relationships in 
the United States has been much 
more limited than that in Europe be- 
cause the surface weather-station net- 
works in and around American cities 
have been too sparse. Information 
useful for such practical problems as 
city planning can be developed for 
major U.S. metropolitan centers only 
on the basis of thorough comparative 
studies of data from denser urban- 
rural surface networks than currently 
exist around most American cities. 

Instrumentation — Satisfactory 
tools to perform needed monitoring 
and study of urban-induced weather 
changes are available. Major advances 
in the development of airborne equip- 
ment to measure meteorological vari- 
ables and aerosols provide the poten- 
tial for obtaining the vertical data 
measurements needed to develop 
time-dependent, three-dimen- 
sional descriptions of the weather ele- 
ments around cities. Field studies of 
the airflow and vertical temperature 
distributions at Cincinnati and Fort 
Wayne, Indiana, have used these new 
instruments and techniques in pio- 
neering research. 

Theory and Modeling — The basic 
theoretical knowledge and formulas 
exist for understanding the atmos- 
pheric chemistry and physics in- 
volved in urban-weather relation- 
ships. Ultimately, studies of the 
urban factors that affect weather 
elements will provide the inputs 



needed to model the urban-weather 
system. However, this will require 
three-dimensional, mesoscale numeri- 
cal models (not currently available) 
and computers (soon to be available) 
with the capacity to handle them. 

Practical Implications of 
Urban-Induced Weather Change 

Regional Planning — The factors 
that produce undesirable weather 
changes clearly need to be assessed, 
and hopefully minimized, in planning 
and building new urban areas and 
redeveloping old ones. For instance, 
the ability of large urban-industrial 
complexes to produce thunderstorms, 
heavy rains, and hailstorms in and 
downwind of the complexes has par- 
ticular importance in hydrologic de- 
sign for urban storm drainage and 
in agricultural planning. 

Pollution — Knowledge of the 
urban-induced wind and rainfall 
changes apt to occur with various 
weather conditions is also required 
for determining whether these 
changes will materially affect pollu- 
tion levels. The generally expected 
decrease in winds and poorer ventila- 
tion are certainly undesirable, but ur- 
ban-increased rainfall is beneficial in 
this connection. Such knowledge 
would also help in improving local 
forecasting, thus enabling man to do 
better planning of his outdoor activities.

Weather Modification — Study of 
the exact causes of various urban- 
produced weather changes can also 
be expected to help man in his efforts 
to modify weather intentionally. In 
particular, the study of the conditions 
whereby urban complexes affect pre- 
cipitation processes could generate 
needed information about the weather 
conditions appropriate for seeding, 
the types and concentrations of ef- 
fective seeding materials, and poten- 
tial rainfall changes expected beyond 
the areas of known urban-related 
increases. Continuing disagreements 
over evaluation of man-made changes 

and the types of physical techniques 
and chemical agents of modification 
reveal the need for proper study of 
these aspects during urban field in- 
vestigations and analyses. 

The economic aspects of this prob- 
lem are hard to assess but are surely 
significant. Reduced visibility, more 
fog, and added snowfall directly and 
indirectly restrict human activity. 
The damages to health, property, and 
crops resulting from added contami- 
nants, less sunshine, higher tempera- 
tures, and less ventilation can be 
serious. National economic losses at- 
tributable to urban-induced weather 
changes are inestimable. 

Requirements for Scientific 

The interactions of urban-produced 
weather changes with such matters as 
agriculture and hydrology, and with 
ecology, are only partly understood, 
since the inadvertent aberrations are 
frequently within the limits of natural 
variability of weather. For instance, 
the increase in crop yields resulting 
from urban-increased rainfall could 
be easily and accurately assessed, 
whereas the effect on crop yields of 
increased deposition of urban con- 
taminants into soils cannot currently 
be assessed without special studies. 
Our knowledge and understanding of 
the interactions of weather changes 
with man and society are almost 
totally lacking. The legal and social 
ramifications are barely understood, 
although the threats of damage to 
property, crops, health, and safety 
from such changes as increased con- 
taminants, more fog, less sunshine, 
and higher temperatures are now 
clear. Certainly, the responses to 
inadvertent weather changes provide 
an opportunity to study and assess 
potential human reaction to planned 
weather modification. The only 
means of fully assessing the urban- 
modification effect of each weather 
element in a given locale, however, 
is to measure all elements in three
dimensions.

Adequate measurement and under- 
standing of the interactions between 
urban factors and atmospheric con- 
ditions that produce, for example, a 
10 percent rainfall increase in one 
urban complex should lead to rea- 
sonably accurate predictions of the 
precipitation changes in most com- 
parable cities where routine measure- 
ments of the urban factors exist or 
could easily be performed. Indeed, 
major projects to study the urban 
conditions that change weather ele- 
ments are sorely needed at several 
cities, each of which should be repre- 
sentative of basically different North 
American climates and urban com- 
plexes so that the results could be 
extrapolated to other cities. A min- 
imum national effort would consist 
of a thorough field project in one 
city that is representative in size and 
climate of several others. 

Such a project would be more 
meaningful if relevant interdiscipli- 
nary projects involving the physical 
and social sciences were conducted
concurrently.

To achieve meaningful, three- 
dimensional measurements of weather 
and urban conditions will require 
marshalling of instrumentation and 
scientific effort to create dense net- 
works of surface instruments heavily 
supplemented by vertical measure- 
ments obtained by aircraft, balloons, 
and remote probing devices. The 
scientific skills, personnel, and fa- 
cilities necessary to explain and pre- 
dict most facets of this topic exist, 
but they have yet to be focused on 
it. Answers exist in relation to sev- 
eral basic questions concerning the 
urban-weather topic, but more con- 
centrated study is needed in the next 
five years. No serious effort has 
been made to describe the interac- 
tion between urban-induced weather 
changes and man, and this, too, is 
urgently needed. If performed, these 
studies should provide information 
adequate to modify some of the un- 
desirable weather changes within ten
years.



The Influence of Urban Growth on Local and Mesoscale Weather 

The fact that large human settle- 
ments change the atmospheric con- 
ditions in their immediate vicinity 
has been recognized for over a cen- 
tury. Up to very recently, however, 
it was considered that these influences 
were strictly local in character. Anal- 
ysis in depth has shown that this 
may not be the case at all and that 
urban influences on the atmosphere 
may well reach considerably beyond 
the urban confines. 

The causes for effects of towns on 
weather and climate are easily traced. 
First, human activities, especially 
combustion processes, produce heat. 
In some cities in northern latitudes 
during the winter this added energy 
may be a sizable fraction of the 
solar energy impinging on the same 
area. In recent years, air conditioning
has also been adding heat to the air 
in summer by dumping the excessive 
indoor heat into the surrounding atmosphere.

The energy balance is further al- 
tered because urban surfaces replace 
vegetation of low heat capacity and 
heat conductivity with stony surfaces 
of high heat capacity and heat con- 
ductivity. These same urban sur- 
faces also alter the water balance. 
Rain runs off rapidly, diminishing 
the natural system of evaporation 
and evapotranspiration, not only fur- 
ther altering the energy balance by 
reducing evaporative cooling but also 
throwing great burdens on drainage 
and runoff systems at times of intense
rainfall.

Compact areas of buildings and 
dwellings also alter the natural air 
flow. They create considerable aero- 
dynamic roughness. This may cause 
changes in the low-level wind profiles 
up to several thousand feet in the
vertical.

Most important, probably, is the 
effect of cities on atmospheric composition,
not only locally but even
for many dozens, if not hundreds, of 
miles downwind. Literally hundreds 
of different chemical compounds from 
industrial and combustion processes 
are blown into the atmosphere. The 
blind faith of the past trusted that 
friendly air currents would dilute and 
dispose of them harmlessly. Yet 
many of these admixtures have be- 
come semi-permanent residents of the 
atmosphere, where they undergo fur- 
ther chemical change through the im- 
pact of solar radiation and by inter- 
action with the water vapor in the
air.

Meteorological Changes and 
their Consequences 

Many of the meteorological altera- 
tions in urban areas have been quan- 
titatively assessed. Most of them are 
universally agreed to. In enumerat- 
ing them we proceed from the sim- 
pler to the more complex and, almost 
in parallel, from the noncontrover- 
sial to the controversial aspects of the
problem.

The Water Balance — It is per- 
fectly obvious that, by replacing the 
naturally spongy vegetative surface 
with impervious roofs, parking lots, 
and streets, any falling rain will 
quickly run off. Indeed, urban drain- 
age systems are designed to carry 
the waters rapidly into streams and 
rivers. The consequence is that flood 
waters may gather more rapidly and, 
in case of excessive rainfalls, not only 
increase crests but also cause rapid 
flooding of low-lying districts in ur- 
ban areas. The lag time of flood 
runoff may be cut in half by the 
impervious areas. 
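The effect of impervious surfaces on peak flow can be illustrated with the rational method, Q = CiA, a standard hydrologic formula not given in the text; the runoff coefficients, storm intensity, and drainage area below are hypothetical:

```python
def peak_runoff_m3s(c_coeff, intensity_mm_hr, area_km2):
    """Rational-method peak flow Q = C * i * A, converted to m^3/s.
    C is the dimensionless runoff coefficient (higher for impervious
    urban surfaces); i is rainfall intensity; A is drainage area.
    A standard hydrologic formula, used here only to illustrate the
    urbanization effect; all input values are hypothetical."""
    intensity_m_s = intensity_mm_hr / 1000.0 / 3600.0
    area_m2 = area_km2 * 1.0e6
    return c_coeff * intensity_m_s * area_m2

# The same 25 mm/hr storm over 10 km^2: vegetated (C ~ 0.2) vs. paved (C ~ 0.8)
rural = peak_runoff_m3s(0.2, 25.0, 10.0)
urban = peak_runoff_m3s(0.8, 25.0, 10.0)
print(round(rural, 1), round(urban, 1))  # 13.9 55.6
```

A fourfold increase in the runoff coefficient translates directly into a fourfold increase in peak flow, which, combined with the shortened lag time, is what drives the rapid flooding of low-lying urban districts.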

Heat Islands — The excess energy 
production of a city and its altered 
heat balance, because of changes in 
albedo and heat characteristics of 
the man-made surface, creates one 
of the most notable atmospheric 

changes in urban areas. It has been 
given the very descriptive label "heat 
island." This term designates a tem- 
perature excess that covers the urban 
area. It is most pronounced in the 
sectors of highest building and popu- 
lation concentrations; on calm, clear 
nights it can reach or even exceed 
10°F. compared with rural
surroundings. (See Figure IV-12) Re- 
cent experiments have shown that a 
single block of buildings will produce 
a measurable heat-island effect. At 
the same time, the reduced evapora- 
tion caused by rapid runoff and re- 
duced vegetation as well as this tem- 
perature increase reduces the relative 
humidity at the surface. 

Wind Circulation — The previously 
mentioned increase in surface rough- 
ness causes decreased wind speed at 
the surface. The heat island also 
induces wind convergence toward the 
urban area. In daytime, the highly 
overheated roof and black-top sur- 
faces create convective updrafts, es- 
pecially in summer. The updrafts 
induce a higher degree of cloudiness 
over the city and contribute to the 
release of showers over the city. 
At night, inversions of temperature 
form over the rural and suburban 
areas while temperature-lapse con- 
ditions continue in a shallow layer 
over the city core. This temperature 
distribution induces a closed circu- 
lation system within a metropolitan 
area, which in turn contributes to 
concentration rather than dispersion 
of pollutants when the general wind 
circulation is weak. 

Solar Radiation — Pollutants act in 
an important way on the incoming 
solar radiation. The aerosol absorbs 
and scatters the solar radiation, af- 
fecting principally the shorter wave- 
lengths. This means that the long- 
wave ultraviolet radiation is radically 
weakened and its possible beneficial 
effects as killer of germs and activa- 
tor of vitamin D in the human skin 




(Illustration Courtesy of the American Meteorological Society) 

The figure shows the isotherm pattern for 2320 PST on 4 April 1952 superimposed 
on an aerial photograph of San Francisco. The relation between the air tempera- 
ture measured 2 meters above the surface and urban development is evident. A 
temperature difference of 20°F. was observed on that calm, clear night between the 
densely built-up business district (foreground) and Golden Gate Park (left rear). 

are reduced or eliminated. At the 
same time, these actinic rays cause 
a large number of photochemical 
reactions in the welter of pollutants. 
Many of them lead to obnoxious sec- 
ondary products such as ozone, which 
irritates mucous membranes, and 
other equally undesirable products. 
They cause notable reduction in vis- 
ibility, which is not only aesthetically 
objectionable but often detrimental 
to aviation. Increased haze and fog 
frequency, compared with the natural 
environment, is a man-made effect, 
a fact that becomes impressive be- 
cause it is demonstrably reversible. 
In some cities (e.g., London) where 
the number of foggy days had grad- 
ually increased over the decades, a 

determined clean-up of domestic fuels and improved heating practices led to an immediate reduction in fog frequency.

Precipitation — Much less certainty 
exists about both the local and more 
distant effects of city-created pol- 
lutants on precipitation. The already 
mentioned increased shower activity 
in summer has probably little or 
nothing to do with the pollutants. 
It is primarily a heat effect, with 
water-vapor release from combustion 
processes perhaps also playing a role. 
But we do have a few well-docu- 
mented wintertime cases when iso- 
lated snowfalls over major cities were 
obviously induced by active freezing 

nuclei, presumably produced by some 
industrial process. There is no in- 
contestable evidence that over-all 
winter precipitation over urban areas 
has increased, but most analyses 
agree that total precipitation over 
cities is about 5 percent to, at most, 
10 percent greater than over rural 
environs, even if all possible oro- 
graphic effects are excluded. More 
spectacular increases observed in the 
neighborhood of some major indus- 
trial-pollution sources are probably 
the effect of sampling errors inherent 
in the common, inadequate rain- 
gauge measuring techniques. 

Even so, there is major concern 
about the very possible, if not already probable, effects of city-pro- 
duced pollutants on precipitation 
processes. One of them can be 
caused by the high emission rates 
of minute lead particles from the 
tetraethyl-lead additive to gasoline. 
Some of these particles combine with 
iodine brought into the atmosphere 
primarily from oceanic sources to 
form lead-iodide. This compound 
has been shown to form very efficient 
and active freezing nuclei, which can 
trigger precipitation processes in only 
slightly sub-cooled cloud droplets. 
The lead particles are so small that 
they will stay in suspension for long 
distances and thus trigger precipita- 
tion at places far removed from the 
sources of the lead. Even more 
ominous could be the swamping of 
the atmosphere by condensation nu- 
clei. These are produced in urban 
areas in prodigious amounts in con- 
centrations surely two orders of mag- 
nitude higher than in uncontaminated 
air. There are literally hundreds of 
thousands of these nuclei in a cubic 
centimeter, and even the most hygro- 
scopic of them competes for the 
available moisture in the air. The 
more nuclei there are, the more likely 
it is that the cloud droplets that form 
will be very small because of the 
large number of competing centers 
around which condensation occurs. 
Small cloud droplets have more dif- 
ficulty in coalescing and forming rain 
than large droplets. Hence it is quite 
possible, although not proven beyond 
doubt, that in some urban areas or 
downwind from them a decrease in 
rainfall could occur. This is one of 
the effects requiring careful watch in 
future research. 
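
The droplet-size argument above is essentially bookkeeping: a fixed amount of condensed water shared among more nuclei makes smaller droplets, with mean radius r = (3L / 4πnρ)^(1/3). The liquid-water content and nuclei counts below are typical illustrative values, not data from this report.

```python
# Mean cloud-droplet radius for a given liquid-water content L shared among
# n droplets per unit volume: each droplet holds L/n of water, so
# (4/3) * pi * r**3 * rho = L / n, giving r = (3L / (4 * pi * n * rho))**(1/3).
import math

def mean_droplet_radius_um(lwc_g_per_m3, n_per_cm3):
    rho = 1.0e6                      # density of water, g per m^3
    n_per_m3 = n_per_cm3 * 1.0e6     # convert nuclei count to per m^3
    r_m = (3.0 * lwc_g_per_m3 / (4.0 * math.pi * n_per_m3 * rho)) ** (1.0 / 3.0)
    return r_m * 1.0e6               # metres -> micrometres

lwc = 0.5  # liquid-water content, g per m^3 (typical for a shallow cloud)
print(f"clean air (100 /cm^3):       r = {mean_droplet_radius_um(lwc, 100):.1f} um")
print(f"polluted air (100,000 /cm^3): r = {mean_droplet_radius_um(lwc, 1e5):.1f} um")
```

A thousandfold increase in nuclei shrinks the mean radius tenfold, moving the droplets away from the sizes at which coalescence, and hence rain formation, proceeds efficiently.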

Atmospheric Stagnation — When 
weather conditions favor slight winds 
and surface temperature inversions, 

air layers in metropolitan areas be- 
come veritable poison traps. These 
can lead to the well-known health- 
endangering pollution episodes. With 
a number of metropolitan areas in 
close proximity, a slight ventilation 
will waft pollutants into the next 
series of settlements within a few 
hours or days and aggravate the situ- 
ation there. This type of accumula- 
tion has not been adequately investi- 
gated either. But the whole area of 
the United States east of the Ap- 
palachians from northern Virginia to 
southern Maine may be affected by 
cumulative pollution effects. There 
are also other megalopolitan areas in 
the country that may need similar 
attention. Computer simulation of 
such atmospheric-stagnation periods 
has made some progress but is still 
severely restricted by the inadequacy 
of the mathematical models and the 
lack of sufficient actual observations. 

Many of the micrometeorological 
alterations brought about by urbani- 
zation have been well documented in 
a number of cities. They have re- 
cently been followed, step by step, in 
a rural area that is in the process of 
becoming urbanized — the new town 
of Columbia, Maryland, where popu- 
lation density has increased from a 
few hundred to a few thousand in- 
habitants and will increase to a hun- 
dred thousand in the current decade. 
Many of the characteristic changes 
in temperature, wind, humidity, and 
runoff are already observable. This 
continuing study in a planned, grow- 
ing community may greatly further 
our knowledge of the micrometeoro- 
logical changes. 

Implications for Town Planning 

It is proper to ask whether we can 

turn this knowledge to use in future 
town planning and redevelopment of 
older cities. The answer is affirma- 
tive. Natural environments charac- 
teristically have a varied mosaic of 
microclimatic conditions, most of 
which are destroyed by urbanization. 
The detrimental effects are primarily 
introduced by compact construction 
with few interruptions, creating an 
essentially new surface of roofs at 
which energy interactions take place. 
In many urban areas, vegetation has 
been sharply diminished or even com- 
pletely eliminated. Reversal of this 
trend will bring about a desirably 
diversified pattern of microclimate. 
Two tall buildings with large green 
and park areas surrounding them are 
far preferable to the typical row 
house or walk-up slum configura- 
tion. The open construction charac- 
teristic of suburban areas has caused 
little climatic deterioration of the environment.

Air pollution will remain a prob- 
lem. There is some merit in using 
tall stacks for the effluents from 
stationary sources. Appropriate loca- 
tion, predicated on the general re- 
gional airflow patterns, is indicated 
for industrial sources of pollutants. 
There is little substantive knowledge 
on possible amelioration of pollutants 
from mobile sources through highway 
routing, construction, elevation, or 
other engineering techniques. Con- 
trol at the source seems to offer the 
only tenable solution over the long run.

Too little is yet known about the 
sinks of pollutants in urban areas, 
although shrubbery and insensitive 
plants seem to offer some help by 
intercepting particulates. 

Urban Effects on Weather — the Larger Scales 

The possibility that human activi- 
ties might be modifying large-scale 
weather patterns or even the global 

climate has received much publicity. 
The present state of atmospheric sci- 
ence does not allow either firm sup- 

port or confident refutation of any 
of the effects which have been postulated.



There is no doubt that cities modify 
their own weather by the local pro- 
duction of heat and addition of ma- 
terial to the atmosphere. "Material" 
includes water vapor (H2O) and carbon dioxide (CO2) as well as the 
gases and particulates commonly 
classed as pollutants. City tempera- 
tures exceed those of similarly ex- 
posed rural areas, particularly at 
night, but the most noticeable change 
is in the solar radiation reaching the 
ground, which is typically about 10 
percent below that of upwind sur- 
roundings. In considering the extent 
to which effects on weather may 
overstep the city boundaries, it is 
convenient to look at three scales — 
local, regional, and global. "Local" 
refers to effects downwind of the 
city at distances up to about 100 
miles; "regional" to subcontinental 
areas of the order of 1,000 square 
miles; and "global" to the whole earth.

Local Effects 

Local effects include deterioration 
of visibility and reduction of solar 
radiation, which are not in ques- 
tion. At 100 miles distance, in New 
England, one knows when the wind 
is blowing from New York City. This 
does not, in general, have repercus- 
sions on the other weather factors 
that are large enough to be estab- 
lished by examining weather records. 
If there are such effects they are 
small and probably lost in the general 
variance, although no very sophisti- 
cated search has been made — for 
example, among satellite cloud pic- 
tures — to verify that speculation. 

In two or three instances, it has 
been claimed that an increase of 
precipitation downwind of cities has 
been established. The best known 
example is at La Porte, Indiana, where 
an apparent considerable excess of 
precipitation over surrounding areas 
has been associated with industrial 
activity (particularly steel mills) in 
the Chicago and Gary, Indiana, areas. 
This seemed to be a clear-cut case, 

but the skill and/or objectivity of 
the one observer whose record estab- 
lished the effect has recently been 
questioned (with supporting evidence) 
by other climatologists. In the other 
cases that have been discussed, in- 
cluding recent claims of an increase 
of shower activity downwind of pulp 
plants in Washington state, the statis- 
tical evidence offered in support of 
the hypothesis of modification is less 
convincing than that for La Porte. 
Physically, there is doubt whether 
any precipitation increase that might 
occur would be an effect of cloud 
seeding by particulate pollutants or 
of the increased triggering of convec- 
tion by the heat and moisture sources 
of the city. The latter explanation is 
gaining favor. 

Regional Effects 

On the regional scale there is gen- 
eral agreement that atmospheric tur- 
bidity — a measure of the extinction 
of solar radiation — has increased 
over the past fifty years in western 
Europe and eastern North America, 
even in locations as remote from 
cities as can be found in these areas. 
Again, there is no indication that 
the reduction in solar radiation reach- 
ing the ground has had any effect 
on other weather elements. Such 
connections are extremely difficult to 
establish, for reasons which will be 
discussed later when we consider 
global effects. 
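
Turbidity, the extinction measure referred to above, can be quantified through the Beer-Lambert law for the direct solar beam, I = I0 exp(-τm), with optical depth τ and air mass m. The τ values below are plausible round numbers for clean and turbid air, assumed here for illustration; they are not measurements cited in this report.

```python
# Extinction of the direct solar beam by an atmosphere of optical depth tau,
# traversed along a path of relative length m (the "air mass").
import math

def direct_beam(i0, tau, air_mass=1.0):
    """Transmitted direct solar irradiance, Beer-Lambert law."""
    return i0 * math.exp(-tau * air_mass)

i0 = 1361.0  # W/m^2, solar constant (modern value, an assumption here)
clean = direct_beam(i0, 0.10)   # clean rural atmosphere
hazy = direct_beam(i0, 0.35)    # turbid urban or industrial atmosphere

print(f"clean: {clean:.0f} W/m^2, hazy: {hazy:.0f} W/m^2")
print(f"reduction from turbidity: {100 * (1 - hazy / clean):.0f}%")
```

Even a modest rise in optical depth cuts the direct beam by some 20 percent, which is why long-term turbidity trends are detectable in solar-radiation records well away from cities.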

There is, however, one possible 
regional effect of pollution that is 
causing international concern, though 
it would not traditionally be con- 
sidered a "weather" phenomenon. 
This is the deposition in precipitation 
of pollutants transported hundreds 
of miles from their source, perhaps 
across international boundaries. The 
best-known case is the occurrence in 
Scandinavia of rainfall with an un- 
usual degree of acidity which has 
been attributed to the transport of 
pollutants emitted in Britain and Ger- 
many. A similar geographical and 

meteorological situation exists in the 
northeastern United States, where the 
situation might repay investigation. 
Persistently acidic rain or snow might 
have long-term effects on forest ecol- 
ogy and lead to reduced productivity 
in forest industries. The connection 
between the observation and its pre- 
sumed cause is simply the fact that 
no other explanation has been con- 
ceived. Statistical or physical links 
have not been demonstrated — in- 
deed, our current ignorance in the 
fields of atmospheric chemistry and 
microphysics precludes a convincing 
physical link. This is potentially one 
of the most serious of the currently 
unsolved scientific environmental problems.
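
The scale of the acidity observation is easy to state in pH arithmetic. Rain equilibrated with atmospheric CO2 alone sits near pH 5.6, while values near pH 4 have been reported for polluted rain; these figures are representative round numbers, not measurements from this report.

```python
# Because pH is a logarithmic scale, a drop of 1.6 pH units corresponds to
# roughly a fortyfold increase in hydrogen-ion concentration.

def hydrogen_ion_molar(ph):
    """Molar H+ concentration for a given pH."""
    return 10.0 ** (-ph)

natural = hydrogen_ion_molar(5.6)  # rain in equilibrium with atmospheric CO2
acidic = hydrogen_ion_molar(4.0)   # acidity reported for polluted rain

print(f"H+ enrichment over natural rain: {acidic / natural:.0f}x")
```

This is the sense in which "an unusual degree of acidity" understates the chemistry: the hydrogen-ion load on soils and waters is tens of times the natural background, not a few percent above it.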

Global Effects 

The possible modification of cli- 
mate by industrial effluents has been 
under serious scientific discussion for 
more than thirty years and the ex- 
tent, nature, and intractability of the 
underlying problems is now becom- 
ing evident. It was postulated in the 
1930's, and it is now clearly estab- 
lished, that the atmospheric CO2 con- 
tent is increasing as a result of com- 
bustion of fossil fuel. Radiative 
transfer calculations indicate that if 
the CO2 content increases, and noth- 
ing else changes, temperature at the 
earth's surface will increase. No one 
ever seriously suggested that "noth- 
ing else changes," but it was noted 
that during the first forty years of 
this century recorded surface tem- 
peratures did increase. The connec- 
tion with CO2 increase was noted and 
extrapolated. There were prophecies 
of deluge following melting of the 
polar ice. However, by 1960 it was 
clear that surface temperatures were 
falling, and at the same time the 
continent-wide increase in turbidity 
was noted. (A global increase cannot 
be established because a network of 
suitable observations does not exist.) 
The obvious connection, on the hy- 
pothesis that solar radiation at the 
surface had decreased and nothing else had changed, was made. The 
extrapolators moved in, and there 
were prophecies of ice ages. 
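
The radiative-transfer result cited above can be caricatured with a later empirical fit, not one from this report: the forcing from a CO2 change is roughly ΔF = 5.35 ln(C/C0) W/m², and the "nothing else changes" surface warming is λΔF for an assumed climate-sensitivity parameter λ. All numbers here are illustrative assumptions.

```python
# Back-of-envelope CO2 warming under the "nothing else changes" caveat
# the text emphasizes. The logarithmic fit is a modern parameterization.
import math

def co2_forcing_w_per_m2(c_ppm, c0_ppm=290.0):
    """Radiative forcing of a CO2 change from baseline c0_ppm (empirical fit)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

lam = 0.5  # K per (W/m^2), an illustrative sensitivity parameter
for c in (330.0, 580.0):  # roughly the early-1970s level; a doubling of 290 ppm
    df = co2_forcing_w_per_m2(c)
    print(f"{c:.0f} ppm: dF = {df:.2f} W/m^2, dT ~ {lam * df:.1f} K (all else equal)")
```

The point of the exercise is the caveat, not the numbers: the tenths of a degree it yields are smaller than the natural fluctuations discussed below, which is exactly why extrapolation from short temperature records is hazardous.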

Statistical and physical explana- 
tion of the problem of climatic change 
can be conceived, but each approach 
has fundamental difficulties. In the 
first case, existing series of climatic 
statistics based on instrumental read- 
ings are short — about 250 years is 
the longest reliable record. The stat- 
istics of these long series are not 
stationary; there is variance at the 
longest periods they cover. Historical 
and geological evidence indicates 
greater fluctuations of climate than 
the instrumental record. Statistics in- 
dicative of climate are not stationary. 
There can be no test of significance 
to separate climatic change that might 
be associated with man's activities 
from the "natural" changes associated 

perhaps with internal instabilities of 
the ocean-atmosphere system or per- 
haps with extraterrestrial change. 
The physical approach leads to sim- 
ilar conclusions, as Lorenz, in par- 
ticular, has pointed out. The equa- 
tions governing the ocean-atmosphere 
system certainly have innumerable 
solutions, and it may be that sets of 
these solutions exist with very dif- 
ferent statistics — i.e., that the earth 
may have many possible climates 
with its present atmospheric composi- 
tion and external forcing function. 
At most, changing the composition 
of the atmosphere (e.g., by adding 
CO2) might change the nature of 
the inevitable change of climate. 

Indicated Future Research 

Present activity is in two direc- 
tions — confirming and extending our 

knowledge of the changes of atmos- 
pheric composition due to industrial 
activity by monitoring programs, 
and developing physical models of 
the climate. This latter is one of the 
major scientific problems of the age, 
and we do not yet know whether it 
can be resolved to any useful extent. 
The requirement is for a model, sim- 
ilar to existing models of the global 
atmosphere and ocean but completely 
independent of any feature of the 
present climate. The most complex 
existing models incorporate a forcing 
function specified in terms of the cur- 
rent climatological cloud distribution 
and ocean surface temperature. The 
output of these models can, therefore, 
at best only be consistent with the 
existing climate. The requirement is 
for a model which will generate its 
own cloud distribution and ocean- 
surface temperatures. 





The Origin of Atlantic Hurricanes 

Atlantic hurricanes are uncommon 
events by comparison with the fre- 
quency of the storms that parade 
across the temperate latitudes of the 
United States each month. In terms 
of their deadly, destructive potential, 
however, they are, individually, the 
greatest storms on earth and the 
most important natural phenomena 
to affect the welfare of the United 
States. A single event may visit more 
than a billion dollars of damage and 
result in hundreds of lives lost, 
mainly due to drownings. In addition 
to carrying sustained winds that 
sometimes exceed 150 miles per hour 
and hurricane tides that may rise 
more than 20 feet above mean sea 
level, this massive atmospheric storm 
often carries with it families of tor- 
nadoes running ahead of the highest 
tides and strongest winds. For ex- 
ample, in Hurricane Beulah, which 
moved into the lower Texas coast 
in 1967, a total of 49 verified tor- 
nadoes was reported. 

The quest for a better understand- 
ing of the hurricane, for means of 
increasing the accuracy of forecasts, 
and, ultimately, for reducing the ex- 
tent of hazard has focused attention 
on the source regions of the seedling 
disturbances from which hurricanes 
grow, and on the environmental 
structure of the equatorial trough and 
the trade winds which control the 
forces promoting hurricane develop- 
ment. This quest has been greatly 
assisted by a new tool of observa- 
tion, the meteorological satellite, 
which maintains a continuous global 
surveillance of cloud groups produced 
by disturbances and storm systems. 

Surveillance and Prediction of 
Hurricane Seedlings 

On average, more than 100 hur- 
ricane seedlings move westward 

across the tropical Atlantic during 
the course of a hurricane season, 
June through November. These seed- 
lings are initially benign rain storms 
which move westward in the flow of 
the trade winds. Less than one-fourth 
of the seedlings develop circulation 
eddies with discrete centers of low 
pressure, and an average of only 10 
per year intensify enough to sustain 
gale-force winds and earn a girl's 
name as a tropical storm. On aver- 
age, 6 of the 10 tropical storms reach 
hurricane intensity at some stage in 
their lifetime and 2 of these cross 
the U.S. coastline. 
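
The seasonal climatology just quoted can be restated as conditional probabilities; the counts below are the seasonal averages given in the text.

```python
# The hurricane "funnel": of the disturbances crossing the tropical Atlantic
# each season, only a small fraction survive each stage of intensification.
seedlings = 100        # disturbances per season (average, per the text)
tropical_storms = 10   # reach gale force and are named
hurricanes = 6         # of the named storms, reach hurricane intensity
us_landfalls = 2       # of the hurricanes, cross the U.S. coastline

print(f"P(named storm | seedling)    = {tropical_storms / seedlings:.2f}")
print(f"P(hurricane | named storm)   = {hurricanes / tropical_storms:.2f}")
print(f"P(U.S. landfall | hurricane) = {us_landfalls / hurricanes:.2f}")
print(f"P(U.S. landfall | seedling)  = {us_landfalls / seedlings:.2f}")
```

Only about one seedling in fifty ultimately strikes the U.S. coast, which is the quantitative form of the puzzle raised later in this section: why do so few seedlings develop?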

For many years, meteorologists 
have known that some hurricanes 
seem to have their origin near the 
west coast of Africa. Not until the 
meteorological satellite provided daily 
information on storm systems around 
the globe, however, was it apparent 
not only that hurricane seedlings 
could be traced back to the African 
coastline in many instances, but also 
that they seemed to stem from a 
source region near the Abyssinian 
Plateau of eastern Africa. They 
march in great numbers across arid 
portions of Africa before reaching the 
Atlantic Ocean, where they begin ab- 
sorbing the moisture necessary to 
drive a vigorous storm system. 

A census of the hurricane seedlings 
that occurred in 1968 is presented in 
Figure V-1, which diagrams the 
sources and movement of disturb- 
ances and their evolution into tropi- 
cal storms. The parade of disturb- 
ances, mainly from Africa westward 
across the Atlantic and Caribbean, 
extends across Central America into 
the eastern Pacific. Approximately 
three-fourths of the eastern Pacific 
storms are spawned by seedlings 
whose origin is on the Atlantic side 
of Central America. 

It is noteworthy, however, that 
not all hurricanes form from seed- 
lings which had sources in Africa. 
Indeed, not all hurricanes form in the 
tropics. Almost every year one or 
two hurricanes develop from tem- 
perate-latitude systems. Typically, the 
trailing edge of an old worn-out cold 
front, while losing its temperature 
contrast, acquires a rich influx of 
moisture from the tropics. The proc- 
ess causes a circular storm to form 
and to develop the structural char- 
acter of the hurricane. Since this 
process frequently takes place in close 
proximity to a U.S. coastline, it poses 
a particularly challenging warning problem.

Surveillance — The surveillance of 
hurricane seedlings and of hybrid 
disturbances that may become hurri- 
canes is done mainly by satellite 
cloud photography. Figure V-2, for 
example, shows a series of hurricane 
seedlings in the tropical Atlantic and 
a hurricane that is lashing the Texas 
coast — in this case, Hurricane Beu- 
lah, September 18, 1967. Two tropi- 
cal storms are also visible in the 
eastern Pacific Ocean. In such photo- 
graphs, the satellite looks down es- 
sentially on the exhaust product of 
the heat engine that generates the storm.

At present, inferences about the 
efficiency of the engine and the en- 
ergy that is being released must be 
drawn empirically and indirectly. 
However, second-generation satellites, 
and techniques for analyzing the 
movement of cloud segments from 
successive pictures, will soon provide 
more direct means of assessing the 
changes in horsepower that the heat 
engine develops. 

The tracking and prediction of hur- 
ricanes cannot be done with meteoro- 




The diagram shows areas of formation and decay of hurricane seedlings during 
1968. Although the African continent appears to be important in the development 
of seedlings, some form in other parts of the Atlantic and the Caribbean. A hurri- 
cane may develop from any of the seedlings. Surveillance and tracking is much 
easier with satellites, but the question of why one seedling develops into a hurri- 
cane and another does not remains unanswered. 


Figure V-2 — HURRICANE BEULAH, 1967 


This cloud mosaic from September 18, 1967, shows Hurricane Beulah before it 
struck the Texas coast. The mosaic was compiled from pictures taken on eleven 
successive passes by the polar orbiting satellite, ESSA-3. Polar orbiting satellites 
pass over a given area twice per day, once during daylight hours and once at night. 

logical satellites alone. Judicious de- 
ployment of aircraft reconnaissance 
is also required to probe the storm 
center directly. The delicate balance 
of forces that usually exists within 
a hurricane and determines its destiny 
can be measured only by direct sens- 
ing, and the only practicable tool in 
sight for this purpose is the recon- 
naissance aircraft. 

Numerical Modeling — The prob- 
lem of modeling numerically the 
movement and development of hur- 
ricane seedlings, and especially the 
movement of full-blown hurricanes, 
is more complicated than that of 
modeling temperate-latitude frontal 
storms. The large-scale temperate- 
latitude storm derives its energy 
mainly from the sinking of large 
amounts of cold air, a process that 
can be described in terms of tem- 
perature contrasts on a scale of many 
hundreds of miles. The tropical 
storm, in contrast, develops in an 
environment where lateral tempera- 
ture contrasts are absent. 



The release of energy in a devel- 
oping tropical storm involves a num- 
ber of links in a chain of actions, 
each of which must unfold in a timely 
and effective manner if the storm is 
to develop. First, the environment 
must be structured to support the 
spin that tries to develop locally in 
the wind circulation when pressure 
first begins to fall. Second, the en- 
vironmental winds must be able to 
distribute systematically the heat re- 
leased by the large cumulus clouds 
that spring up near the area of max- 
imum spin. It is the systematic dis- 
tribution of this heat, not its release 
per se, which generates fresh kinetic 
energy for intensification of the storm.

As the tropical storm intensifies 
further and approaches hurricane 
force, the system depends uniquely 
on a continuous flow of heat energy 
from the sea to the air. These proc- 
esses involve a subtle interaction be- 
tween the scales of motion charac- 
teristic of temperate-latitude storms 
and those characteristic of cumulus 
clouds only a few miles in diameter. 
This interaction is difficult to model, 
as is the flow of heat energy from the 
sea to the air. The primary purpose 
of project BOMEX (Barbados Ocean- 
ographic and Meteorological Experi- 
ment) conducted from May through 
July of 1969, was to gain better 
understanding of the exchange proc- 
esses across the ocean/atmosphere interface.

The modeling problem, especially 
in connection with the tracking of 
undeveloped disturbances, is further 
complicated by the fact that in the 
tropics there is essentially a two- 
layer atmosphere, with disturbances 
in the lower layer sometimes travel- 
ing in a direction opposite to those 
in the upper layer. 

Because of all these complications, 
no model yet exists that can predict 
in real time the movement and develop- 
ment of hurricane seedlings. A num- 

ber of diagnostic models have been 
produced which seem to simulate, in 
a research environment, many of the 
physical processes that occur during 
this development and that charac- 
terize the behavior of the full-grown 
hurricane. However, forecasting pro- 
cedures for tropical disturbances and 
storm systems still depend primarily 
on the identification, description, 
tracking, and extrapolation of the 
observed movement of the system. 

Present-Day Techniques — Fortu- 
nately, the digital computer provides 
the forecaster with rapid data- 
processing which enables him to as- 
sess the immediate behavior of storm 
systems and how this may reflect 
on the future movement and devel- 
opment potential. Because of the in- 
creasing use of machines for data- 
processing, it is now possible to make 
more extensive use of analogues to 
compare the present storm system 
with similar systems from historical 
records and thereby compute the 

probable movement and intensifica- 
tion to be expected. 

Figure V-3 is an example of one 
such method developed during 1969 
at the National Hurricane Center. 
In this case, the computer is required 
to search historical records for all 
storms that were similarly located 
and whose characteristics were com- 
parable to the storm system for which 
a forecast must be made. From the 
historical record, a most-probable 
track for periods up to 72 hours is 
determined and a family of prob- 
ability ellipses is computed show- 
ing expected deviations from the 
most-probable track (50% and 25% 
probability areas). This family of 
ellipses is used to identify the area 
of the coastline to be alerted initially 
to the threat of a hurricane. 
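
The analogue procedure can be sketched in a few lines: select historical storms whose initial positions resemble the current one, average their displacements for a most-probable track point, and use the scatter of the displacements as a stand-in for the probability ellipses. Everything here, including the toy historical records and the function name forecast_24h, is hypothetical illustration, not the National Hurricane Center's actual method.

```python
# Minimal analogue track forecast. Each historical record holds a storm's
# starting position and its observed 24-hour displacement in degrees.
import math
import statistics

history = [
    (24.0, -85.0, 2.0, -3.0),
    (25.0, -84.0, 2.5, -2.5),
    (23.5, -86.0, 1.5, -3.5),
    (30.0, -70.0, 4.0, 1.0),   # too far from the current storm; filtered out
]

def forecast_24h(lat, lon, max_dist_deg=3.0):
    """Most-probable 24h position from nearby analogues, plus their spread."""
    analogues = [(dlat, dlon) for (alat, alon, dlat, dlon) in history
                 if math.hypot(alat - lat, alon - lon) <= max_dist_deg]
    mean_dlat = statistics.mean(d[0] for d in analogues)
    mean_dlon = statistics.mean(d[1] for d in analogues)
    # Spread of displacement magnitudes: a crude 1-D stand-in for the ellipse.
    spread = statistics.pstdev(math.hypot(*d) for d in analogues)
    return (lat + mean_dlat, lon + mean_dlon), spread

point, spread = forecast_24h(24.0, -85.0)
print(f"most-probable 24h position: {point}, spread ~ {spread:.2f} deg")
```

A real implementation would match on intensity and season as well as position, and would compute a full covariance ellipse for each forecast interval out to 72 hours, but the selection-then-average structure is the same.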

Other more sophisticated tools, us- 
ing statistical screening techniques, 
are also used by the forecaster to 
guide his judgment in predicting hur- 
ricane movement. 




In this relatively crude warning technique, a computer searches historical data to 
find a hurricane situation with similar characteristics to the one under observation. 
It then prognosticates future positions for 12, 24, 36, 48, and 72 hours, as shown 
in the figure, based on the history of the earlier hurricane. The size of the proba- 
bility ellipses indicates the magnitude of error that is involved in the use of this technique.



Development of still more sophis- 
ticated prediction models depends on 
a better means of observing the in- 
teractions between large and small 
scales of motion. The major emphasis 
of the Global Atmospheric Research 
Program's first tropical experiment, 
scheduled for the Atlantic Ocean in 
1974, will be to describe and under- 
stand cloud clusters. The results of 
this investigation should provide 
valuable guidance in modeling the 
interaction between meso- and syn- 
optic-scale motions. For the imme- 
diate future, however, the emphasis 
will probably have to remain on 
development of numerical methods 
that will minimize errors in predict- 
ing tropical disturbances and storms. 
Unless vast resources are devoted to 
the problem, sophisticated prediction 
models are not apt to become avail- 
able in less than five to eight years, 
if then. 

The median error in predicting the 
landfall of hurricanes along a U.S. 
coastline continues to decrease slowly, 
although it varies from year to year. 
This progress is due not so much to 
advances in modeling hurricanes 
numerically as it is to the availability 
of better facilities to track and ob- 
serve disturbances at each stage of 
development and of modern tech- 
nology that provides rapid processing 
of data from the storm area and 
environment. These facilities permit 
us to apply diagnostic tools of rea- 
soning in an objective fashion, though 
we have only scratched the surface 
in the development of such tools. 
Apart from any progress that might 
be made in modeling the behavior of 
hurricanes, there is good reason to 
estimate that the median error for 
predicting hurricane movement near 
our coastlines, now about 110 nauti- 
cal miles for a 24-hour movement, 
can be reduced by 30 to 40 percent. 
This depends, however, on exploiting 
information from the meteorological 
satellite to obtain numbers — rather 
than impressions — concerning the 
physical character of the environment 
in which the hurricane or its seedling is embedded.

Basic Understanding of the 
Hurricane System 

While much has been learned 
about the hurricane, its structure, 
and the energetics that cause a seed- 
ling disturbance to develop, there 
remain notable gaps in the funda- 
mental understanding of the hurri- 
cane system. The first is the puzzle 
of why so few hurricanes manage 
to develop from the abundance of 
seedlings that parade across the tropi- 
cal scene. Secondly, the hurricane is 
basically an unstable system varying 
in intensity from day to day and 
even from one six-hour period to the 
next, but the reasons for these varia- 
tions are not understood. The whole 
concept of weather modification in 
hurricanes may depend on a better 
understanding of the natural instabil- 
ities in this delicately balanced system.

Answers to these questions will 
probably depend on a concerted pro- 
gram of field experimentation and 
numerical modeling. To pursue the 
problem only through numerical 
modeling is risky for the simple 
reason that, in so complex a system, 
the modeling problem becomes in- 
tractable unless there are extensive 
uses of approximations, parameteri- 
zations, and other mathematical sim- 
plifications which, while yielding in- 
teresting results, may only crudely 
simulate the real atmosphere. Ex- 
perience has shown that the best 
results come from a two-pronged 
program which, in step-wise fashion, 
produces a model for one facet of 
a development and then verifies the 
result of this simulation by field 
exploration in the real atmosphere. 

Prospects for Reducing the 
Hurricane Hazard 

Ideally, one would like to find some 
means to prevent all hurricane seed- 
lings from developing into severe 
storms while retaining the useful 
rainfall carried by these disturbances. 
Although many suggestions have been made for cloud-seeding or other cloud-modifying measures to curb the formation of hurricanes, none has been based on a physical hypothesis that considers both the cloud processes and the circulating properties of the cloud environment.

It appears more and more likely 
that the formation of a hurricane is 
something of an accident of nature, 
at least with regard to the particular 
cluster of clouds in which the event 
occurs. In general, a storm center 
tends to form somewhere in an en- 
velope of rain clouds spread over 
hundreds of miles. But there is still 
no reliable means of predicting which 
particular cluster nature will pick to 
foster the growth of a storm center. 
Therefore, even if one knew precisely 
what modification techniques to ap- 
ply to a cluster of clouds (no more 
than 25 or 30 miles in diameter, for 
example) — and one does not know 
this yet — it would be impossible to 
know where to send the aircraft to 
conduct the seeding or take other 
preventive actions. 

Cloud Seeding: Project STORM- 
FURY — As for curbing the fury of 
the hurricane, it must be conceded 
that, at present, the only hope lies 
in identifying, and hopefully treading 
on, the "Achilles heel" of a delicately 
balanced storm system — its ability 
to release latent heat under certain 
circumstances. That is precisely what 
the Project STORMFURY hypothesis 
seeks to accomplish. 

While scientists do not yet fully 
agree on the benefits to be expected 
from systematically seeding hurri- 
canes or seeking in other ways to 
upset the balance of forces in the 
storm, those who have followed the 
STORMFURY experiments cannot 
help but be excited about the very 
encouraging results obtained in 1969 
from Hurricane Debbie. If the same 
order of response from cloud seeding 
is obtained in one or two additional 
experiments, it will be possible to 
demonstrate beyond a reasonable doubt that a significant reduction
can be made in the destructive po- 
tential of hurricanes, including the 
damage due to hurricane tides, by 
strategic seeding of the eye wall. 

This is the most exciting prospect in 
all geophysical research and develop- 
ment, both because of the immediate 
potentialities for reducing property 
losses and saving lives in hurricanes and because the insight gained from
this experiment should open the door 
to more far-reaching experiments 
aimed at modifying other threatening 
large-scale storms. 

A Report on Project STORMFURY: 
Problems in the Modification of Hurricanes 

Damage to property in the United 
States caused by hurricanes has been 
increasing steadily during this cen- 
tury. Hurricanes caused an average 
annual damage in the United States 
of $13 million between 1915 and 
1924. By the period 1960 to 1969, 
this figure had soared to $432 million. 
Hurricane Betsy (1965) and Hurri- 
cane Camille (1969) each caused more 
than $1.4 billion in damage. Even 
after adjusting these values for the 
inflated cost of construction in recent 
years, there remains a 650 percent 
increase in the average annual cost 
of hurricane damage in less than 50 
years. Since Americans are accelerat- 
ing construction of valuable buildings 
in areas exposed to hurricanes, these 
damage costs will probably continue 
to increase. 
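
As a quick consistency check, the nominal rise in these damage figures and the construction-cost inflation it implies can be computed directly. The sketch below uses only the dollar amounts and the 650 percent figure quoted above; the implied inflation factor is a derived, approximate quantity, not a figure from the report.

```python
# Back-of-envelope check of the damage statistics quoted in the text:
# average annual damage of $13 million (1915-1924) versus $432 million
# (1960-1969), with a stated 650 percent increase after adjusting for
# construction-cost inflation.

early_avg = 13e6        # average annual damage, 1915-1924 (dollars)
late_avg = 432e6        # average annual damage, 1960-1969 (dollars)

nominal_ratio = late_avg / early_avg           # roughly a 33-fold nominal rise
real_ratio = 1 + 650.0 / 100.0                 # 650 percent increase = 7.5x in real terms

# The rise in construction costs implied by the two ratios:
implied_cost_inflation = nominal_ratio / real_ratio

print(f"nominal rise: {nominal_ratio:.1f}x")
print(f"implied construction-cost inflation: {implied_cost_inflation:.1f}x")
```

The gap between the 33-fold nominal rise and the 7.5-fold real rise is consistent with construction costs roughly quadrupling between the two periods.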

The loss of life from hurricanes 
has been decreasing about as dra- 
matically as the damages have been 
increasing. This decrease in number 
of deaths can be attributed largely 
to improvements in hurricane warn- 
ing services and community prepared- 
ness programs. The reduction in loss 
of life is especially notable consider- 
ing that the population has been in- 
creasing in hurricane-vulnerable areas 
just as rapidly as the value of prop- 
erty. Figure V-4 illustrates the trends 
with time in damages and loss of 
life in the United States caused by hurricanes.

When warnings are timely and 
accurate, lives can be saved by evacu- 
ating people to safe locations. Prop- 
erty damages can be reduced only 
by building hurricane-resistant structures or by reducing the destructive potential of hurricanes. But the first
solution may be quite expensive. 

Extreme destruction may result 
from any one of three different attributes of a hurricane: (a) the storm
surge, associated ocean currents, and 
wave action along the coast; (b) the 
destructive wind; and (c) rain-created 
floods. The hurricane winds that 


[Figure V-4 (bar graph): hurricane damage in millions of dollars (1957-59 construction-cost base) and loss of life in number of deaths, by period.]

The bar graph shows the trends in loss of life and damage due to hurricanes. The 
damage figures have been adjusted to eliminate inflationary and other fluctuating 
trends in the cost of construction. 



sometimes approach 200 miles per 
hour may cause storm surges of 20 
to 30 feet or so, the development of 
strong coastal currents which erode 
the beaches, and the onset of moun- 
tainous waves. Once the latter three 
elements are in being, they are far 
more destructive than the winds and 
are usually responsible for the greater 
damage. Their destructive power 
varies directly with the speed of the wind.

Damage due to sea forces and to 
winds is concentrated along and near 
the seacoast; even the damage at- 
tributed to winds alone usually drops 
off drastically within a relatively few 
miles of the coast when a hurricane 
moves inland. Damage from rain- 
caused floods, on the other hand, may 
extend far into the interior and is 
particularly acute in mountainous 
regions traversed by the remnants of 
a hurricane. This is especially true 
in situations where rain-induced 
floods originate in mountains near 
the coast and arrive at the coastal 
plain before the ocean waters have 
receded. In view of the difficulty of 
building structures to resist all these 
destructive elements, efforts have 
lately concentrated on reducing the 
destructive potential of hurricanes.

If the present program for modi- 
fying hurricanes to reduce their in- 
tensity should prove effective, the 
potential benefit/cost ratio could be 
of the order of 100:1 or 1,000:1. It 
should be emphasized that the modifi- 
cation program has no intention of 
either "steering" or completely de- 
stroying hurricanes. The rainfall 
from hurricanes and tropical storms 
is an essential part of the water bud- 
get of many tropical and subtropical 
land areas, including the southeastern 
United States. The hope is to reduce 
a hurricane to a tropical storm by a 
reduction in the speed of the concen- 
trated ring of violent winds near the 
center, leaving the rainfall and total 
energy release of the over-all storm 
essentially unchanged. 

Details of the Project 

The groups active in Project 
STORMFURY, a joint effort of the 
U.S. Navy and the National Oceanic 
and Atmospheric Administration (NOAA),
conducted experiments on hurricanes 
in 1961, 1963, and 1969. In each 
case, the objective was to reduce 
the maximum winds of the hurricane. 
The technique called for seeding a 
hurricane with silver iodide crystals 
in order to cause supercooled water 
drops to freeze and release their latent 
heat of fusion. In the earlier years, 
the experiments consisted of seeding 
a hurricane one time on each of two 
days. The results appeared favorable 
but were inconclusive, since the 
changes were of a magnitude that 
often occurs naturally in hurricanes. 
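
The energy scale of this freezing mechanism can be illustrated with a rough calculation. The mass of supercooled water frozen is an assumed round number chosen for illustration, not a project measurement.

```python
# Rough scale of the latent heat of fusion released when seeding
# freezes supercooled cloud water. The 1e9 kg mass below is an
# assumed order-of-magnitude value, not a measured quantity.

LATENT_HEAT_FUSION = 3.34e5   # J released per kg of water frozen

frozen_water_kg = 1e9          # assumed mass of supercooled water frozen
energy_joules = frozen_water_kg * LATENT_HEAT_FUSION

# Express the release in kilotons of TNT equivalent for scale
KILOTON_TNT = 4.184e12         # joules per kiloton of TNT
print(f"heat released: {energy_joules:.2e} J "
      f"(~{energy_joules / KILOTON_TNT:.0f} kt TNT equivalent)")
```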

In August 1969, the STORMFURY
group seeded Hurricane Debbie five 
times in a period of eight hours on 
the 18th and 20th of the month, with 
no experiment on the 19th. Following 
the seedings, maximum winds at 
12,000 feet decreased within six 
hours by 31 percent on the 18th and 
15 percent on the 20th. The storm 
regained its original intensity on the 
19th. While changes of this mag- 
nitude have happened in hurricanes 
on which there was no experiment, 
they have been quite rare. When one 
considers the entire sequence of 
events of 18-20 August, one can say
that such a series of events has not 
happened in previous hurricanes 
more than one time in 40. Thus, 
while we cannot state that the Debbie 
experiments proved that we know 
how to modify hurricanes, the results 
were certainly very encouraging. 
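
The force of the one-in-40 estimate can be illustrated with a simple independence calculation. The daily probability used below is an assumed illustrative value, not a measured hurricane climatology, and real hurricane intensity changes are not truly independent from day to day.

```python
# If a wind decrease as large as those observed after seeding occurs
# naturally on a given day with probability p, and the two seeded days
# are treated as independent trials, the chance of seeing decreases on
# both days by coincidence is p**2. The value of p here is assumed.

p_natural = 0.15                 # assumed daily chance of a large natural decrease
p_both_by_chance = p_natural ** 2

print(f"chance of both decreases occurring naturally: "
      f"{p_both_by_chance:.4f} (about 1 in {1 / p_both_by_chance:.0f})")
```

With an assumed 15 percent daily probability, two coincidental decreases come out near one chance in 44, the same order as the report's one-in-40 figure.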

Along with the experimental pro- 
gram, there has been an intensive 
effort to develop models which simu- 
late hurricanes. The best of these 
models now reproduce many features 
of a hurricane quite well. One devel- 
oped by Rosenthal has been used to 
simulate seeding experiments, includ- 
ing the one performed on Debbie. 
The STORMFURY experiment was 
simulated by adding heat at appropriate radii at the 500 and 300 millibar levels (approximately 19,000 and
32,000 feet, respectively) over a pe- 
riod of ten hours. The amount of 
heat added was believed to be com- 
parable to the amount of latent heat 
that can be released by seeding a 
hurricane. Within six hours after 
cessation of the simulated seeding, 
the maximum winds at sea level 
decreased about 15 percent. The 
time-scale for the decrease in max- 
imum winds was roughly the same 
as that in the Debbie experiments. 
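
The pressure-to-altitude conversion quoted above can be approximated from the standard-atmosphere relation h = (T0/L)[1 − (p/p0)^(RL/gM)]. The real tropical atmosphere is warmer than the standard one, which lifts the pressure surfaces and is consistent with the report's slightly higher figures.

```python
# International Standard Atmosphere check of the pressure levels used
# in the simulated seeding (500 and 300 millibars). These are
# sea-level-based standard values, not tropical soundings.

T0 = 288.15          # ISA sea-level temperature (K)
L = 0.0065           # ISA tropospheric lapse rate (K/m)
P0 = 1013.25         # ISA sea-level pressure (hPa)
EXPONENT = 0.190263  # R*L/(g*M) for dry air

def isa_altitude_ft(p_hpa):
    """Altitude of a pressure level in the standard atmosphere, in feet."""
    h_m = (T0 / L) * (1.0 - (p_hpa / P0) ** EXPONENT)
    return h_m / 0.3048

print(f"500 mb: ~{isa_altitude_ft(500):.0f} ft")
print(f"300 mb: ~{isa_altitude_ft(300):.0f} ft")
```

The standard atmosphere places 500 mb near 18,300 feet and 300 mb near 30,100 feet; the warmer hurricane environment raises these surfaces toward the 19,000- and 32,000-foot values quoted in the text.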

Evaluation of Results 

The net results of the various field 
experiments and the implications 
from modeling experiments give 
strong reason for believing that at 
least some degree of beneficial modification was achieved in the Debbie
experiments. Unfortunately, how- 
ever, we cannot say the matter is 
proved nor can we claim the results 
are statistically impressive at some 
high level of significance. 

The modeling results are most in- 
teresting and highly suggestive, but 
there are certain deficiencies in the 
model which require that one be 
cautious in interpreting them. First, 
a highly pragmatic parameterization 
of cumulus convection is used. Sub- 
stantial improvements in this area 
must await increased understanding 
of both cumulus convection and its 
interaction with larger scales of mo- 
tion. Second, the major simplifying 
assumption of circular symmetry 
used in the model precludes direct 
comparison between model calcula- 
tions and specific real tropical cy- 
clones. Real cyclones are strongly 
influenced by interaction with neigh- 
boring synoptic systems, and these 
vary markedly in character and in- 
tensity from day to day. 

When one looks at parameters 
other than the winds for further 
verification of seeding effect, either 
the data were not collected in Hur- 
ricane Debbie or insufficient data are available from previous storms to
provide a clear definition of the nat- 
ural variability of the parameter. 
These points can be illustrated by 
discussing the various measurements 
that should be made. 

The following are either assumed 
by the modification hypothesis or are 
implied by results from the modeling experiments:

1. In hurricane clouds, large quan- 
tities of water substance exist 
in the liquid state at temperatures lower than −4° centigrade.

2. Introduction of silver iodide 
crystals into these supercooled 
clouds will cause the water 
droplets to freeze and release 
the latent heat of fusion. 

3. If the heat is released in the 
annulus radially outward from 
the mass of relatively warm air 
in the center of the storm, it 
should cause a temperature 
change that will cause a reduc- 
tion in the maximum tempera- 
ture gradients in the hurricane. 

4. A reduction in the mean tem- 
perature gradients must result 
hydrostatically in a reduction 
of the maximum pressure gradi- 
ent in the storm. 

5. A reduction in the pressure 
gradients should cause a reduc- 
tion in the maximum winds in 
the storm. 

6. The belt of maximum winds 
should migrate outward after 
the seeding has had time to 
affect the storm. This action 
presumably would be accom- 
panied by development of a 
larger eye, with the eye wall at 
a larger radius, or, possibly, a 
change in structure of the wall cloud.

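The link from step 4 to step 5 can be sketched with the gradient wind balance, v²/r + fv = (1/ρ)(∂p/∂r): a weaker radial pressure gradient supports a weaker balanced wind. All parameter values below are assumed round numbers chosen for illustration, not measurements from any storm.

```python
import math

# Gradient wind balance: v**2/r + f*v = (1/rho) * dp/dr.
# Solve for the balanced wind v given an assumed radial pressure
# gradient, then show how a modest reduction in that gradient
# reduces the wind. All values are illustrative assumptions.

f = 5e-5        # Coriolis parameter at low latitude (1/s), assumed
rho = 1.15      # air density (kg/m^3), assumed
r = 30e3        # radius of maximum winds (m), assumed

def gradient_wind(dpdr):
    """Positive root of v**2/r + f*v - dpdr/rho = 0."""
    a, b, c = 1.0 / r, f, -dpdr / rho
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

dpdr = 0.1                          # assumed gradient, Pa/m (~10 hPa per 10 km)
v_before = gradient_wind(dpdr)
v_after = gradient_wind(0.8 * dpdr)  # 20 percent weaker gradient

print(f"balanced wind before: {v_before:.0f} m/s")
print(f"balanced wind after:  {v_after:.0f} m/s "
      f"({100 * (1 - v_after / v_before):.0f}% reduction)")
```

Because the v²/r term dominates near the eye wall, the wind scales roughly with the square root of the pressure gradient, so a 20 percent weakening of the gradient yields only about a 10 percent reduction in the maximum wind.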
All of the above suggest certain 
measurements that should be made in the storm. If the changes in these
parameters occur at the right time, in 
the right sequence, and with proper 
magnitudes, the cumulative evidence 
that the experiment was a success 
could be very convincing. Efforts 
were made to collect all of these data 
in Debbie. In some cases, however, 
the efforts were unsuccessful or the 
data do not permit conclusive deductions.

An aircraft was equipped to make 
measurements of the character and 
amount of water substance in the 
lower levels of the supercooled layer 
in Hurricane Debbie. While attempt- 
ing to make the first pass across the 
storm at the 20,000-foot level, a 
supercharger malfunctioned and the 
aircraft was no longer able to main- 
tain that high an altitude. There are, 
however, some qualitative observa- 
tions which suggest there was a 
change in character of the water 
substance from predominantly super- 
cooled water to a mixture of ice and 
water. These observations are not 
at the right level or of sufficient 
detail and quality to document incon- 
trovertibly that the seeding accom- 
plished a major transformation in the 
liquid-ice budget of the clouds. This 
should not be interpreted to mean 
that the seeding failed to accomplish 
the desired effect, however. There 
are just insufficient data to convince 
a skeptic that the effect was actually achieved.

Very detailed and frequent obser- 
vations of the temperature, pressure, 
and winds were made along diameters 
across the hurricane at the 12,000- 
foot level. From these data we can 
compute changes with time in the 
parameters of their gradients at any 
point along the diameter. The changes 
in the maximum wind speed have 
already been mentioned. 

The changes observed in tempera- 
tures and temperature gradients are 
not conclusive enough to support the 
above hypotheses. On the other 
hand, if the release of latent heat 
was in the layers above 18,000 feet, one should not expect dramatic
changes in the temperature and its 
gradient at 12,000 feet. We have in- 
adequate temperature measurements 
in the layer between 18,000 feet and 
30,000 feet, since lack of properly 
instrumented aircraft precluded the 
acquisition of the type and quantity 
of data needed. Furthermore, results 
from the seeding simulation experi- 
ment conducted with the model sug- 
gested that the added heat is rapidly 
dispersed and dramatic changes in 
the temperatures are not likely to occur.

The changes in the pressure and 
pressure gradients measured at 12,000 
feet do give some support to the 
success of the seeding and some indi- 
cation that results conformed to the 
hypothesis. But the great amount of 
noise in the variations of this param- 
eter and lack of adequate knowledge 
concerning natural variations in hur- 
ricanes make it impossible to say the 
case is proved. Once again, the in- 
dications are positive but inconclusive.

Intensive efforts were made to get 
continuous coverage of the structure 
of the storm by airborne radar and 
by the ATS-3 satellite. This was 
done with the hope that these data 
would reveal the nature and time 
of changes in the cloud structure 
that might be caused by the seeding. 
The radar pictures suggest that the 
eye size did become larger after the 
seedings; the changes in size even 
appeared to have a periodicity sim- 
ilar to that of the seedings: about an 
hour and a quarter after several of 
the seedings there was a rapid in- 
crease in the area encompassed by 
the wall cloud. 

One must be cautious, however, in 
placing too much emphasis on this 
evidence. The eye wall was pulsating 
during most of the time the STORM- 
FURY crews were monitoring it, so 
there were many changes in size, 
shape, and character of the eye be- 
fore, during, and after the seeding. 
There were also many problems with the data. No single radar monitored
the storm during the entire seeding 
operation, and it was necessary to 
use various radars to obtain a con- 
tinuous time record of the eye area. 
After considering the many prob- 
lems of interrelating various radars, 
calibrating ranges, distortions, etc., 
one can only conclude that there is 
some evidence that the seeding did 
indeed affect the hurricane clouds 
around the eye in the manner hy- 
pothesized, but that the data are 
of such a heterogeneous nature as to 
be inconclusive in themselves. 

Pictures of Hurricane Debbie were 
taken by ATS-3 each day of the 
period 17-21 August. Normally,
processed pictures do not reveal 
much detail of the seeded areas. 
Although enhanced pictures were 
made along lines suggested by Fujita, 
they have not yet been developed. 
Work with a small sample of these 
pictures suggests that we will obtain 
some interesting information about 
changes in the cloud structure of 
the storm, though it is unlikely that 
these pictures will be adequate for 
determining with confidence whether 
the seeding had a major effect on the storm.

Wind-field measurements did show 
that the radius of maximum winds 
increased following the seeding. 

Requirements for Future Activity 

The use of theoretical models to 
study the modification hypotheses 
was discussed in the previous section. 
Some deficiencies of the present models were also mentioned. We should
use the present models to learn as 
much as possible about the interac- 
tions and potential instabilities of 
hurricanes, but we should also con- 
tinue experiments to develop further 
information as to how well the mod- 
els simulate actual hurricanes. At 
best, they can do this only in a mean 
sense. We should also continue work 
to remove the restrictive assumptions; 
these relate to circular symmetry, 
interaction between the hurricane 
and synoptic-scale features in the 
environment, dynamics of cumulus 
clouds, and interactions between the 
hurricane scale of motion and circu- 
lations of smaller scale. The matter 
of parameterizing cumulus processes 
in the model must be re-examined 
and carefully compared with cumulus 
models and observations. A more 
closely spaced grid should be used 
in the eye-wall region. And, finally, 
the outer radial boundary (now at 
440 km) should be moved outward 
and other outer boundary conditions 
investigated to make sure they are 
not determining or markedly affect- 
ing the solutions following the "seeding."

When the field experiments are 
repeated, every effort should be made 
to obtain data that will permit veri- 
fying various steps related to the 
seeding hypotheses. These were dis- 
cussed in the preceding section. Fa- 
cilities and manpower are not avail- 
able at the present time to obtain all 
of these data. 

In summary, the present status of 
our scientific knowledge suggests quite strongly that techniques presently available are adequate to
achieve beneficial modification of 
mature hurricanes. Data from experi- 
ments and theoretical studies support 
each other, but in each case there 
are gaps in our knowledge which 
suggest we should be cautious in 
making extreme claims. What is clear 
is that we should repeat the Debbie- 
type experiments on other hurricanes 
as soon as possible to see if we can 
duplicate the Debbie decrease in wind 
speeds and to document details of 
the effects. We should continue our 
theoretical investigations to remove 
some of the limiting assumptions. 

With losses from hurricanes in the 
United States currently averaging 
over $400 million per year and loss 
of life still a threat, action should 
be taken as soon as possible. Since 
the prospects seem good that we can 
reduce the destructive power of hur- 
ricanes, the need for additional ex- 
periments becomes much more urgent.

If present techniques are adequate 
for modifying a hurricane, it is quite 
likely that we can collect enough 
information during the next one or 
two years to justify application of 
the experiments to storms expected 
to affect the coastline. If present 
techniques are inadequate, we have 
several other approaches which should 
be explored. The time needed to 
develop and test better hypotheses 
or to improve and exploit the present 
hypotheses suggests that we should 
plan five to ten years ahead. 

The Scientific Basis of Project STORMFURY 

Project STORMFURY is concerned 
with the problem of devising ex- 
periments to modify hurricanes and 
tropical cyclones. Because the design 
and evaluation of such experiments 
depends essentially on understanding 
the structure and behavior of "nat- 
ural" hurricanes, the close associa- 

tion of the project with the National 
Hurricane Research Laboratory of the 
National Oceanic and Atmospheric 
Administration is appropriate. The 
impetus for such experiments arises 
primarily from the large potential 
benefits, in the form of reduced prop- 
erty damage and loss of life, which could be realized from relatively small
modifications of the intensity or mo- 
tion of these storms. 

During the past decade, increased 
understanding of hurricanes, based 
on both descriptive and theoretical 
studies, has suggested at least two possible avenues of achieving beneficial modification. Utilization of the
approach with the sounder basis of 
scientific understanding has so far 
been precluded by logistic considera- 
tions. The second approach, which 
involves complex but feasible logis- 
tics, has been used in experiments 
on three hurricanes with encouraging 
but not yet definitive results even 
though the detailed physical basis 
for the approach is not completely understood.

Present Scientific Status 

Special observational efforts and 
more intensive theoretical studies 
during the past twenty years have 
led to important advances in the 
understanding of the physics of hur- 
ricanes, but significant gaps remain 
to be filled. Preliminary efforts at 
constructing mathematical models of 
the hurricane have been encouraging, 
but serious defects remain. 

Data Base — For hurricanes in the 
mature stage and in dissipating stages 
over land, the descriptive data base 
is good in the qualitative sense. The 
principal data deficiencies consist of 
quantitative measurements of such 
items as: the distribution of water 
in all phases as a function of tem- 
perature in the storm; the fluxes of 
heat and water vapor from the sea 
to the air under the extreme condi- 
tions present in the hurricane; and 
the natural variability of various 
meteorological parameters in the 
inner regions of the hurricane as a 
function of time-scales ranging from 
an hour to a day or two. 

Basis for Modification — The most 
significant addition to our scientific 
knowledge of hurricanes in recent 
years has been the convergence of 
both theoreticians and empiricists on 
the concept that the hurricane is the 
complex result of the interaction of 
physical processes on several dis- 
tinctly different scales. It is now 
agreed that these storms, whose 
space-scale of a few hundred kilo- 
meters and lifetime of a few days 

typify the synoptic-scale of atmo- 
spheric systems, depend critically on 
microscale (1 to 10 meters) turbulent 
motions of the surface boundary 
layer for the addition of heat and 
water vapor from the sea surface, and 
on mesoscale convective clouds, pri- 
marily organized in the annular ring 
surrounding the eye, for release of 
the latent heat of water vapor as the 
primary driving mechanism of the 
storm. Furthermore, the combined 
processes on these scales are influ- 
enced by interactions with much 
larger scale systems of the atmosphere.

It is this dependence on microscale 
turbulence and mesoscale convection 
that has suggested the two avenues 
to modification. Reduction of the 
evaporation associated with the for- 
mer would certainly result in reduc- 
tion of hurricane intensity, but this 
approach to modification has been 
prevented by insurmountable logistic 
problems. Redistribution of the 
latent heat release associated with 
the latter through the use of cloud- 
seeding techniques shown to influ- 
ence the structure and dynamics of 
convective clouds is logistically feas- 
ible and has been employed in ex- 
periments on a small number of 
hurricanes. There are residual un- 
certainties and disagreements as to 
the correct seeding techniques and 
the interpretation of the experimental results.

Theoretical models of the hurricane 
incorporating the various scales dis- 
cussed above with varying degrees 
of simplification have been developed. 
Results of computer simulations 
based on these models indicate qual- 
itative success in modeling the physi- 
cal processes responsible for the 
formation and maintenance of the 
hurricane. But significant quantita- 
tive uncertainties remain. Further- 
more, present models cannot con- 
tribute significantly to problems of 
hurricane motions. 

Interactions — Our present scientific knowledge and understanding of the interaction of hurricanes with other aspects of the atmospheric general circulation, with other environmental systems such as the ocean,
and with man and society are qualita- 
tive and inadequate. For example, 
it is known that rainfall associated 
with hurricanes is often of consider- 
able economic benefit, but it can also 
lead to disastrous floods. We do not 
know how the atmospheric circula- 
tion would change if hurricanes did 
not exist. Nor is it decided who in 
society is to decide when and where 
hurricane modification should be attempted.

Requirements for Scientific Advance

Significant scientific controversy 
exists with respect to the following 
aspects of hurricane modification: 

1. Can the effects of seeding ex- 
periments be unequivocally de- 
tected against the large natural 
variability of hurricanes? 

2. How, exactly, does cloud seed- 
ing redistribute latent heat re- 
lease and how is this redistribu- 
tion responsible for decreases 
in hurricane intensity? 

3. Are the present mathematical 
models and associated com- 
puter simulations of hurricanes 
sufficiently realistic to serve as 
indicators of differences in ex- 
pected behavior of natural and 
seeded hurricanes? 

4. Are the amounts of super- 
cooled liquid water necessary 
if seeding techniques are to 
result in significant redistribu- 
tion of latent heat release ac- 
tually present in the correct 
portions of the storm, and is 
this water actually frozen by 
the seeding? 

The most urgently needed scien- 
tific advances fall into two categories: 
observations and theoretical modeling. Observations are needed to
document more thoroughly the nat- 
ural variability of hurricanes; to de- 
termine the distribution of water in 
all its phases in the inner portions of 
both natural storms and before, dur- 
ing, and after seeding in experimental 
storms; and to quantify further the 
interactions among physical processes 
on the various scales important to 
hurricanes. Theoretical models and 
associated computer simulations need: 
(a) to be improved in the way in 
which smaller-scale processes are 
treated implicitly through parameter- 
ized relationships; (b) to be gen- 
eralized such that the effects of in- 
ternal processes on the motion of 
the storm can be treated; and (c) to 
utilize improved observations as varying boundary and initial conditions for the models.

Time-Scale — The urgency of sat- 
isfying these needs is undoubtedly 
relative. In terms of clarifying the 
scientific basis for Project STORMFURY, the need is very urgent. The importance of substantiating the encouraging, but inconclusive, results from past experience and, thereby, providing a solid foundation for modification experiments on storms threatening inhabited coastlines cannot be overemphasized.

These advances in scientific back- 
ground are needed within one to two 
years. Instrumentation and observa- 
tional platforms needed to fill most 
of the known gaps in the scientific data base for both natural and experimental hurricanes are available. Similarly, significant improvement in
computer simulation is possible with 
existing computers. 

Legal Implications — The greatest 
potential policy problems associated 
with hurricane modification will arise 
from the legal questions that will be 
raised at both national and interna- 
tional levels when modification ex- 
periments are carried out on storms 
which shortly thereafter affect in- 
habited coastal regions or islands. 
When and if we are able to predict 
what will result from such modifica- 
tion attempts, who will make the 
decisions? A study of these problems 
is sorely needed. 

A Note on the Importance of Hurricanes 


Our understanding of the physical 
laws governing the behavior of the 
atmosphere has not advanced to the 
point where we can deduce from 
these laws that hurricanes, or any 
tropical circulation systems resem- 
bling hurricanes, must occur. It is 
just reaching the stage where we can 
deduce theoretically that systems of 
this sort may occur. Recent numeri- 
cal experiments aimed at simulating 
hurricanes have produced cyclonic 
circulations of hurricane intensity 
from initial conditions containing 
weak vortices. Other experiments 
aimed at simulating the global circu- 
lation have produced concentrated 
low-pressure centers within the trop- 
ics, but the horizontal resolution 
has been so coarse that it is impos- 
sible to say whether the models are 
trying to simulate hurricanes. 

Nevertheless, from our general 
knowledge of atmospheric dynamics 
together with the observation that 
hurricanes do occur and continue to 
occur year after year, we can safely 
conclude that hurricanes not only 

may but must occur if nature is left 
to its own devices. We could make 
a similar statement about other at- 
mospheric motion systems (e.g., tor- 
nadoes) that occur repeatedly. 

Such reasoning does not apply to 
everything that is observed in nature. 
It would be incorrect to conclude, 
for example, that a particular species 
of animal is necessary simply because 
it exists. If we should destroy all 
members of the species, there is no 
assurance that evolutionary processes 
would ultimately create the same 
species again. However, hurricanes 
are not a species; new hurricanes are 
not ordinarily born of old ones. On 
the contrary, they, or the weaker 
tropical disturbances that mark their 
origin, appear to be spontaneously 
generated when the proper distribu- 
tions of atmospheric temperature, 
moisture, wind, oceanic temperature, 
and probably certain other quantities 
occur in the tropics on a worldwide 
or ocean-wide scale. 

Strictly speaking, therefore, we 
should modify the statement that 
hurricanes are necessary by saying 

that they are necessary only if the 
larger-scale conditions characterizing 
the tropical environment are main- 
tained over the years. The absence 
of hurricanes in the southern Atlantic 
Ocean is presumably due to the 
local absence of favorable large-scale 
conditions, as is the relative scarcity 
of hurricanes in other oceans during 
the winter season. 

What If Hurricanes Could Be De- 
stroyed? — Assuming that the tropi- 
cal environment is favorable to the 
formation of hurricanes, the latter, 
in forming, will exert their own ef- 
fects on the environment. Hurricanes, 
by virtue of the active cumulonimbus 
clouds that they contain, are effective 
in transporting large amounts of heat 
and moisture upward to high levels. 
They may also carry significant 
amounts of heat, moisture, and mo- 
mentum from one latitude to another. 
In any event, they act to alter the 
environment; in the long run, their 
effect on the environment must be 
exactly canceled by that of other processes.

Suppose, then, that nature is not 
allowed to take its course. Suppose that we possessed the means, not for
directly altering the large-scale con- 
ditions that favor the development of 
hurricanes, but for destroying each 
hurricane individually during its 
formative stages, soon after its initial 
detection. In the hurricane-free world 
that we would have temporarily cre- 
ated, the effects of hurricanes on the 
environment would no longer cancel 
the other effects and the environment 
would proceed toward a different 
state of long-term statistical equi-
librium.

Very likely, the new environment 
would be more favorable for the 
natural development of hurricanes 
than the old one. This would be true 
if one of the natural effects of hurri- 
canes is to remove from the environ- 
ment some of its hurricane-producing 
potential, as would be expected if 
the hurricane is an instability phe- 
nomenon. Perhaps a super-hurricane 
would then try to form to do the 
work of the ordinary ones that were 
suppressed; perhaps it would not. 
In any event, the task of artificially 
removing the hurricanes one by one, 
if such a task can be visualized at all,
would become even more difficult
than it had originally been. 

Beneficial Effects 

The most frequently cited bene- 
ficial effect of hurricanes is probably 
the rainfall that they supply to certain 
areas, with its obvious value to agri- 
culture. A familiar example of such 
an area is the southeastern United 
States, where a fair fraction of the 
total annual rainfall is supplied by 
tropical storms. Yet even if this 
region were deprived of all its hur- 
ricanes, there would still be ample 
rainfall left to support other crops 
not presently raised in this region. 
This leads us to suggest that the 
principal beneficial effect of hurri- 
canes may be to help preserve the 
climatic status quo — a status quo 
which the hurricanes themselves have 
helped to create. 

To appreciate the value of preserv- 
ing the status quo, let us suppose that 
two regions of the United States, each 
possessing a reasonably satisfactory
climate, could somehow suddenly ex-
change climates with one another. 
The climatic statistics of the United 
States as a whole would then be 
unaltered. Yet the average climate of 
the United States would be worse, 
because the climate would be "worse" 
in each of the two regions in question. 
That is, the new temperature and 
rainfall regime in each region would 
presumably be unfavorable to the 
plant and animal life existing there, 
especially to the crops, and very 
likely also to many aspects of human 
culture. The new climates would 
favor new flora and fauna, and after 
a sufficient number of years those 
in one region might become effec- 
tively interchanged with those in 
the other. But during the period of 
adjustment there would be a net loss. 

Since hurricanes exert a modifying 
influence on the larger-scale tropical 
environment, a further effect of hurri- 
canes is to help preserve the climatic 
status quo throughout the tropics, 
even in those areas not frequented 
by heavy hurricane rains or violent 
hurricane winds. Here, too, the effect 
may be beneficial. 

Geomorphological Effects of Hurricanes 

The morphologic changes induced 
by hurricanes are concentrated along 
seacoasts and the shores of large 
estuaries. As they move inland, few 
major tropical cyclones encounter at- 
mospheric conditions necessary to 
maintain their destructive violence 
for as much as 100 miles. Only 
rarely are they capable of retaining 
their structures when crossing land 
areas, as from the coast of the Gulf 
of Mexico to New England or to the 
Canadian border. 

Hurricane Camille (1969) — shown 
in Figure V-5 — reached the Gulf 
Coast as the most intense hurricane 
ever reported, breaking records for 
barometric depression and wind ve- 
locities and bringing tragic devasta- 
tion to the coast of Mississippi. It
retained its identity for an exceptional
distance, causing excessive rainfall 
and flooding that did considerable 
damage in West Virginia and south- 
western Virginia the day after leaving 
Mississippi. And yet, Camille caused 
few morphologic changes of any con- 
sequence. It effected many short- 
lived, minor physical changes on is- 
lands in Louisiana and Mississippi, 
but in comparison with losses in hu- 
man and animal life and with destruc- 
tion of property, the physical changes 
were trivial. 

Effects of Differing Coastal Types

Morphologic changes resulting 
from hurricanes depend mainly on
the physical characteristics of the
coasts involved. Three examples will 
illustrate the relationships: 

Plum Island, Massachusetts, expe- 
rienced the impact of Hurricane Carol 
(1954). A detailed line of levels had 
been surveyed across the marshes 
behind the island, the coastal dunes, 
and the island's beach. This survey 
was completed the day before Carol 
arrived. On the morning following, 
the beach was broadened and reduced 
as a result of wave erosion to a level 
well below that determined by the 
instrumental survey. Three days 
later, however, most of the beach 
had been restored, and within a few 
days following its profile had re-
turned essentially to its pre-hurricane
form.



Figure V-5 — HURRICANE CAMILLE, 1969 

Hurricane Camille on August 17, 1969, in addition to being very intense, covered an 
extremely large area as shown in this segment of a satellite picture from the geo- 
stationary satellite ATS 3. A geostationary satellite is fixed relative to the earth 
and so is able to photograph the same area once every 25 minutes. Camille was 
first observed as a large area of cloudiness over the Lesser Antilles. It was tracked 
for over a week before it hit the Mississippi coast with 190-mph winds and 30-foot 
tides. Even though adequate warnings were given, many people were killed as a 
result of coastal flooding. 
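
The caption's phrase "fixed relative to the earth" follows from Kepler's third law: a satellite whose orbital period matches the earth's rotation must sit at one particular altitude. The sketch below derives that altitude from standard physical constants; the constants and the calculation are added here for illustration and are not from the report.

```python
# Geostationary altitude from Kepler's third law:
# r = (G * M * T^2 / (4 * pi^2)) ** (1/3), measured from the earth's center.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the earth, kg
T_SIDEREAL = 86164.0 # sidereal day (one rotation of the earth), s
R_EARTH = 6.371e6    # mean radius of the earth, m

r = (G * M_EARTH * T_SIDEREAL ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
print(round(altitude_km))  # about 35,800 km above the surface
```

Only from that one altitude over the equator can a satellite such as ATS 3 stare continuously at the same storm, repeating its pictures every 25 minutes.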

Mauritius, during the southern- 
hemisphere summer of 1960, felt the 
effect of Hurricane Alix, which passed 
close to its west coast in January, 
and the full impact of Hurricane 
Carol in February. Carol was accom- 
panied by the lowest barometric de- 
pression and most violent winds, as 
well as the greatest economic loss, 
ever experienced in the southwestern 
part of the Indian Ocean. The path 
of Carol was such that the 1,200 
square mile area of Mauritius was 
completely covered by the passing 
eye of the storm. 

It happened that six months earlier 
a field party of the Coastal Studies 
Institute of Louisiana State Univer- 
sity had completed an intensive study 
of the vegetation, landforms, and 
beaches of the entire coast. Following 
Carol, field parties returned in 1960 
and again in 1963 to assess changes. 
As a great number of photographs 
had been taken during the first visit, 
an opportunity was afforded for tak- 
ing subsequent photographs from 
identical positions with the original 
camera. Many individual plants were 
re-located, and their conditions were
compared on a basis of pre-hurricane,
a-few-months-later, and three-years- 
later investigations. The photographs 
and other comparisons demonstrated 
very minor physical changes, an im- 
mense upset in the exotic flora, and 
the rapid recovery of endemic vegeta-
tion.

Louisiana, in June 1957, experi- 
enced the direct impact of Hurricane 
Audrey, a storm that caused the 
greatest loss of life and property 
damage of any early-summer hurri- 
cane on the Gulf Coast. The coastal 
marshes were flooded to almost rec- 
ord depths of as much as 13 feet. 
The surge of sea water removed 
practically all beach sand and shell 
for about 100 miles along the coast 
of western Louisiana. Loss of this 
thin, protective armor exposed readily 
eroded marsh sediments to wave ero- 
sion, which was responsible for ac- 
celerated coastal retreat for as long 
as four years, after which effective 
beaches accumulated. 

In 1953, a field party had been 
engaged in the study of a coastal 
mudflat that began to form in 1947. 
The party had implanted 25 monu- 
ments as reference points for that 
number of surveyed cross sections. 
Most of these survived the onslaught 
of Audrey and were used to monitor 
coastal retreat at several-month in-
tervals.

The most spectacular geomorphic 
event related to the hurricane was 
the lifting, shifting, and deposition of 
two huge masses of mudflat sediment 
during the storm surge. These de- 
posits were separated by about 19 
miles. The western mass had a max- 
imum length of 12,350 feet; the east- 
ern deposit, 11,350 feet. The respec- 
tive widths were 1,050 and 1,000 feet. 
Each overlapped the shore and ex- 
tended inland about 2,051 feet, with 
an original thickness of 11 inches. 
Several months later, after drying, 
each mass had formed a sharply 
bounded, dense sheet of gelatinous
clay up to 6 inches thick; they are
permanent additions to the marsh
surface.
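
The dimensions quoted above permit a rough volume estimate of the two deposits. The sketch below (an illustration added here, not a calculation from the original report) uses only the figures given in the text: the stated lengths, widths, and the original 11-inch thickness.

```python
# Rough volume of the two mudflat deposits left by Hurricane Audrey,
# computed from the dimensions quoted in the text.

def deposit_volume_cubic_yards(length_ft, width_ft, thickness_in):
    """Volume of a slab-shaped deposit, in cubic yards."""
    cubic_ft = length_ft * width_ft * (thickness_in / 12)
    return cubic_ft / 27  # 27 cubic feet per cubic yard

west = deposit_volume_cubic_yards(12_350, 1_050, 11)  # western mass
east = deposit_volume_cubic_yards(11_350, 1_000, 11)  # eastern mass

print(round(west))  # → 440255 cubic yards
print(round(east))  # → 385340 cubic yards
```

Each mass is thus on the order of 400,000 cubic yards of sediment moved in a single storm surge.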

Some Generalizations 

The three specific examples given 
here justify several generalizations 
that can be substantiated by many 
other case histories: 

1. Catastrophic as they are from 
human, biological, and eco- 
nomic standpoints, in most in- 
stances hurricanes result in 
only minor and ephemeral geo- 
morphic changes, and these are 
confined to coasts. 

2. A coast where durable rock is 
exposed to the violence of 
storm attack (Mauritius ex- 
ample) suffers negligible physi- 
cal change. 

3. A coast flanked by deep water 
close to the shore (Plum Island 
and Mauritius examples) is af- 
fected mainly by high seas. 
Unconsolidated materials such 
as beaches and sand dunes ex- 
perience abrupt changes, but 
these last for only short periods 
of time. 

4. A coast flanked by a broad, 
gently inclined continental 
shelf, with a long fetch across 
shallow bottoms, suffers 
changes associated with flood- 
ing (Louisiana example). 

Hurricane Carol (Mauritius) 
brought a storm surge that registered 
only about 33 inches above expected 
level on the tide gauge at Port Louis. 
The island is surrounded by deep 
water. Hurricane winds generated 
high seas along all shores, however, 
and it was these that accounted for 
physical and biological changes. Much 
the same experience was associated 
with another Hurricane Carol (Plum 
Island). At Plum Island, the 10- 
fathom isobath hugs the shore 
closely, and a depth of 50 fathoms
lies only 6 miles out. In contrast, in
southern Louisiana the 10-fathom 
isobath lies about 43 miles from the 
shore, and the 50-fathom depth lies 
some 118 miles out. Hurricane surges 
are low over open ocean and are not 
significant aboard ship, but they rise 
to 15 feet or more when their rate of 
forward advance is reduced by shear 
or friction, creating greater and 
greater turbulence and more vigorous 
internal waves as they travel across 
wide, gently rising bottoms, especially 
at shallow depth. 
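
One simplified way to see why a modest open-ocean surge grows over a broad, gently rising bottom is Green's law for long waves, under which amplitude varies as the inverse fourth root of depth. This sketch is an illustration added here, not part of the original analysis: it neglects the wind stress, shear, and friction the text emphasizes, and the starting amplitude and depths are assumed values.

```python
# Green's law sketch: long-wave amplitude grows as depth**(-1/4)
# when a surge travels from deep water onto a shallow shelf.
# Illustrative only; real surges also involve wind stress and friction.

def greens_law_amplitude(a0, h0, h1):
    """Amplitude after a long wave moves from depth h0 to depth h1 (same units)."""
    return a0 * (h0 / h1) ** 0.25

# An assumed 2-foot open-ocean surge in 50 fathoms (300 ft) of water,
# shoaling to 2 fathoms (12 ft) near the coast:
a1 = greens_law_amplitude(2.0, 300.0, 12.0)
print(round(a1, 1))  # → 4.5 (feet), a better-than-twofold amplification
```

The effect compounds with the fetch-dependent buildup described above, so shallow, wide shelves such as Louisiana's see far higher surges than steep coasts like Plum Island or Mauritius.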

Although not much coastal change 
ordinarily occurs when water attains 
a depth of more than 5 fathoms 
within a short distance, too much de-
pendence should not be placed on
this relationship. With gently in- 
clined bottoms, offshore surges may 
grow to proportions that create ex- 
tensive flooding. These surges con- 
tinue for long distances, both across 
shallow bottoms and adjacent coastal 
lowlands. Even in the extensive and 
shallow area east of New Orleans, 
local inhabitants identify channels in 
the marsh and cuts across linear 
islands as having resulted from hur- 
ricanes in 1915 and 1925. A popular 
resort on Isle Derniere, south of New 
Orleans and landward about 27 miles 
from the 10-fathom isobath, was 
wiped out with tragic consequences 
in 1856, when the position of the 
low sandy spit on which it was built 
was shifted westward. 

Hurricane Protection: 
Problems and Possibilities 

An individual hurricane arrives as 
a possibly catastrophic event, one 
that is likely to be considered unique 
in the minds of people affected. The 
fact is, however, that the storm is but 
one of a recurring series that reach 
the region at highly irregular inter- 
vals. Hurricane arrivals are as un- 
certain as those of impressive earth- 
quakes. Although the present state 
of the art does not justify exact fore- 
casts concerning either, except for 
short terms in the case of hurricanes,
both meteorologists and seismologists
recognize that there are definite hur- 
ricane- and earthquake-prone regions. 
Eventually, it may be possible to 
educate people living in them to 
recognize that they must protect 
themselves against potential catas-
trophes.

Most hurricanes reaching the 
United States originate either be- 
tween the Azores and Cape Verde 
Islands or else in the Caribbean. 
There is no evidence that any origi-
nate within 6° of the equator. In
most cases they are first identified 
in latitudes between 10 and 20° 
north. The shores of the Gulf and 
Atlantic coasts, from Brownsville, 
Texas, to Lubec, Maine, are every- 
where vulnerable to hurricane attack. 
Tracks are particularly concentrated 
near Puerto Rico and Florida, but 
extreme damage has occurred around 
all parts of the Gulf of Mexico and 
up the Atlantic seaboard at least as 
far as Cape Cod. 

Defense against events such as 
hurricanes, tornadoes, earthquakes, 
and destructive volcanic activity is 
most effective in places where dis- 
aster strikes most frequently. Cy- 
clone cellars have undoubtedly saved
many lives in the American Middle 
West. The Japanese have done well 
in designing structures that withstand 
intense earthquake tremors. 

Practically all serious damage re- 
sulting from hurricanes is caused by 
human mistakes. Protective beaches 
are mined for sand, shell, or gravel. 
Sand dunes, among nature's most 
effective coastal protectors, are bull- 
dozed away to level land for building 
sites or even to enhance seascape 
views. A trip along any part of the 
Atlantic coast between Florida and 
Cape Cod soon after a hurricane will 
demonstrate gross variations in dam- 
age, depending on whether beaches 
or dunes had been altered seriously. 
Cities and towns suffer most, not only 
because they are concentrations of 
people and buildings but also from 
the fact that they have introduced
many more "improvements" that de-
stroy or upset natural conditions. In- 
tervening rural areas are left rela- 
tively untouched, particularly if their 
coastal sand dunes have been left in-
tact.

It is difficult to convince people 
that hurricanes bring most disastrous 
results to places near disturbed 
beaches and sand dunes, and that 
substantial buildings reduce losses of 
life immensely. Hurricane Camille 
evidenced tremendous contrasts be- 
tween the minor damage to substan- 
tial buildings and the destruction of 
shoddy structures, however nicely 
adorned. Great loss of life occurred 
in hotels and motels with inadequate 
framework, the buildings being held 
together mainly by wallboard or in- 
sufficiently bonded partitions of thin 
concrete blocks. Surges up to twenty 
feet high did relatively little damage, 
however, to buildings with adequate 
frames, whether of wood or steel. 
Trailer courts were wiped out, even 
several blocks back from the shore, 
while old homes with good construc- 
tion withstood the surge much better 
even where they were located on or 
near the Gulf of Mexico. 

While the number of seashore 
buildings anchored on effective pil- 
ings often increases for some years 
after a hurricane, this is not always 
true. After Hurricane Audrey, nearly 
all new houses were built on concrete 
slabs at ground level, following the 
dictates of a current style rather than 
in anticipation that the buildings will 
probably be flooded by several feet 
of seawater within a decade or two. 
People appeared to assume that Au- 
drey would be the last hurricane to 
strike the coast of southwestern
Louisiana.

The National Weather Service per- 
forms an invaluable service in pro- 
viding hurricane watches, alerts, and 
warnings, each of which becomes 
progressively more specific about 
time of arrival and width of danger- 
ous impact as the storm nears the
mainland coast. But to what extent
has public confidence been created? 
For some reason the people in a small 
but active community on Breton Is- 
land (east of the Mississippi River 
Delta) heeded a hurricane warning in 
1915. The buildings in the commu- 
nity were totally destroyed, and have 
not been rebuilt, but every inhabitant 
was evacuated before the storm 
struck, without the loss of a single 
life. In 1957, on the other hand, few 
people heeded timely, adequate warn- 
ings of the approach of Hurricane 
Audrey toward the Louisiana coast. 
Many hurricanes had brought storm 
surges to the area, but all had been 
lower than the elevation of the higher 
land in the vicinity (about 10 feet). 
Hurricanes were an old story. Most 
of the people remained at home and 
were totally unprepared for vigorous 
surges that swept as much as three 
feet across the highest land in the 
vicinity, causing tremendous loss of 
life and property. On several occa- 
sions during the past thirteen years 
people have evacuated the region 
as soon as early warnings have been 
issued, but in no case did a dangerous 
surge occur. Will these experiences 
result in destroying confidence in 
warnings by the time that the next 
potential disaster appears? 

Awareness of danger is almost im- 
possible to maintain for disasters that 
recur a generation or more apart. 
Probably the most effective hurri- 
cane-protection measures result from 
legal actions, at state and local levels, 
such as the formulation and enforce- 
ment of adequate building codes, pro- 
vision for rapid evacuation, mainte- 
nance of reserve supplies of fresh 
water for domestic use, well-con- 
structed sanitary systems, and the 
availability of carefully planned 
health and emergency facilities. 

Needed Scientific Activity 

In their pristine condition, factors 
associated with the destructive effects 
of hurricanes are in reasonable equi-
librium with those that resist geo-
morphic change. Scientific knowledge 
about hurricane origins, mechanics, 
physics, and behavior slowly in- 
creases, as does knowledge concern- 
ing the destruction or alteration of 
shoreline landforms and the accumu- 
lation and transport of near shore 
sediment. The effects of upsetting 
natural environmental conditions may 
be forecast with considerable qualita- 
tive precision. 

In order to understand more com- 
pletely the relations between hurri- 
canes and their physical effects on 
coastal lands, the following suggested 
activities appear to be pertinent: 

1. Accelerating the Weather Serv- 
ice's program of hurricane 
tracking and its ability to fore- 
cast the intensity and time of 
arrival of individual storms and 
to designate the coastal areas 
most likely to suffer. 

2. Encouragement of studies by 
coastal morphologists to iden- 
tify areas where physical 
changes are imminent, with em- 
phasis on man-induced causes, 
in the hope that they may be- 
come expert in assessing the 
results of undesirable practices. 

3. Creation, on a national level, 
of a group charged with moni- 
toring proposed activities of 
U.S. Army and other coastal 
engineers from the standpoint 
of assessing probable long-term 
changes that designs of de- 
fenses against the sea are likely 
to induce. This should be a 
cooperative, rather than strictly 
policing, activity. There is tre- 
mendous need for better com- 
munication between scientists 
and engineers. Scientists need 
to be better informed about en- 
gineering design practices, and 
engineers need better under- 
standing of the conclusions of 
basic scientific research. 



Status of Tornado Research 

Tornadoes are among the smallest 
in horizontal extent of the atmos- 
phere's whirling winds, but they are 
the most locally destructive. Al- 
though they are occasionally reported 
from many places, it is only in the 
United States that very intense tor- 
nadoes occur frequently. A typical 
intense tornado accompanies an 
otherwise severe thunderstorm, lasts 
about 20 minutes, and damages an 
area a quarter of a mile wide along 
a 10-mile path toward the northeast. 
The maximum winds (never accu- 
rately measured) are probably be- 
tween 175 and 250 miles per hour, 
but damage is caused as much by the
sudden drop of pressure as by wind;
the drop amounts in extreme cases
to about 0.1 of the total atmospheric
pressure, or 200 pounds per square foot. Especially
when structures are poorly vented, 
roofs and walls are moved outward 
by the higher pressure within; then, 
as their moorings are weakened, they 
are carried off horizontally by the
wind.

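The quoted pressure figure can be checked with a line of arithmetic. A minimal sketch, assuming the standard sea-level pressure of about 2,116 pounds per square foot and a hypothetical 20-by-30-foot roof (neither value is from the report):

```python
# Checking the report's figure: a pressure drop of 0.1 atmosphere,
# in pounds per square foot, and the resulting net outward load on a
# hypothetical 20 ft x 30 ft roof of a poorly vented structure.

STANDARD_ATM_PSF = 2116.2  # standard sea-level pressure, lb/ft^2 (assumed)

drop_psf = 0.1 * STANDARD_ATM_PSF    # ~212 lb/ft^2, i.e. roughly the 200 quoted
roof_load_lb = drop_psf * (20 * 30)  # outward force from higher pressure within

print(round(drop_psf))      # → 212
print(round(roof_load_lb))  # → 126972
```

A net load of well over 100,000 pounds on an ordinary roof makes clear why venting and anchorage, not wind speed alone, determine whether a structure survives.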
During the past 15 years, about 
125 persons have been killed an- 
nually by tornadoes. Average prop- 
erty damage has been about $75 
million. These figures may be com- 
pared with estimated losses owing to 
lightning, hail, and hurricanes as 
shown in Figure V-6. 

The high tornado death rate in 
relation to property loss is attribut- 
able partly to our inability to warn 
effectively against impending torna- 
does. A tornado is a very destructive 
phenomenon, but it usually exists for 
only a short time and affects only 
the thousandth part of a region cov- 
ered by tornado-spawning thunder- 
storms. Extreme variability is an 
essential characteristic. Most tornado 
losses are associated with just a few
storms that utterly destroy the struc-
tures in significant portions of urban 
areas or in whole small communities. 
These events, sudden and never fore- 
shadowed more than a few hours in 
advance, leave the survivors stunned 
amid desolation; they call for a sud- 
den focused response, of a magnitude 
akin to that demanded in war, by 
the affected community and by state 
and national governments. 

Tornado Prediction 

We have noted that the typical 
tornado accompanies an otherwise 
severe thunderstorm. Severe thunder- 
storms are themselves hazards and 
demand public forecasts, and the 
possibility of tornadoes is usually 
indicated when severe thunderstorms 
are predicted. 

Our forecasts, which must start 
from a description of the present state 
of the atmosphere, are less specific 
than we would like. This lack of 
specificity is associated in part with a
lack of knowledge, but also with
observations that are too sparse to de- 
scribe atmospheric variability on the 
scale of tornado or thunderstorm phe- 
nomena. Thus, the extent of a severe 
thunderstorm is 10 to 20 miles and 
the lifetime of a storm system is 
generally about six hours. But the 
distance between first-line surface 
weather stations is about 100 miles, 
and between upper air stations about 
150 miles. Observations are made 
hourly at the surface stations (more 
often under special conditions) but 
usually at only 12-hour intervals at 
the upper air stations. Therefore, 
even if our knowledge were otherwise 
adequate to the task, the observing 
system would limit us to indicating 
the probability of thunderstorms in 
regions much larger than the storms
themselves.

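A simple sampling argument, added here for illustration, makes the limitation concrete: observations spaced a distance d apart can resolve features no smaller than roughly 2d, the sampling-theorem limit.

```python
# Why the routine observing network cannot resolve individual storms:
# stations spaced d apart resolve features no smaller than about 2*d.
# Spacings are the figures quoted in the text.

def smallest_resolvable_feature(station_spacing_miles):
    """Approximate smallest feature (miles) a network of this spacing can resolve."""
    return 2 * station_spacing_miles

surface = smallest_resolvable_feature(100)    # first-line surface stations
upper_air = smallest_resolvable_feature(150)  # upper-air stations

print(surface, upper_air)  # → 200 300, versus a storm extent of 10 to 20 miles
```

Even perfect physical knowledge could not, with these spacings, pin a 10-to-20-mile storm down to better than a broad region of probability.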
At present, tornadoes are fore- 
shadowed from one to six hours in 
advance, for periods of about six 
hours and in regions of about 25,000 
square miles. About 50 percent of 
such predictions are correct, with the 


Figure V-6 — [Table comparing types of storm by average annual deaths
and average annual property damage in the U.S.; the damage column reads
$75 million, $100 million, $150 million, and $500 million, but the
storm-type labels and death figures are not recoverable here.]

*Based on data from 1955-1970

Loss of life is almost four times greater from severe storms than from hurricanes, 
while property damage is less than one-half as great. 



incorrect forecasts being nearly equally
divided between cases without tor-
nadoes and cases with tornadoes 
outside, but near, the predicted re- 
gions. It should be noted that the 
climatological expectancy of torna- 
does during six hours in a randomly 
selected 25,000-square mile area in 
eastern and central United States is 
only about one in 400. Plainly, then, 
present forecasts give evidence of 
considerable skill in identifying the 
meteorological parameters associated 
with severe storms and tornadoes 
and in correctly anticipating their
occurrence.

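That skill can be made concrete by comparing the watch hit rate with the climatological base rate; a minimal sketch using only the numbers quoted above:

```python
# Skill implied by the quoted figures: the fraction of watches verified
# versus the climatological expectancy for a randomly chosen
# 25,000-square-mile area over six hours.

hit_rate = 0.50      # about 50 percent of watch predictions are correct
base_rate = 1 / 400  # climatological expectancy quoted in the text

skill_ratio = hit_rate / base_rate
print(round(skill_ratio))  # → 200
```

A watch thus marks a region roughly 200 times more likely than chance to experience a tornado, even though any single watch is as likely as not to verify.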
Briefly stated, the storm-forecast- 
ing parameters are warmth and mois- 
ture in a layer about 5,000 feet deep 
near the earth's surface, with a cool 
dry region at intermediate levels, 
strong winds in the upper atmos- 
phere, and a trend toward intensifi- 
cation rather than diminution of these 
conditions. The prediction of all the 
necessary features is based on ob- 
jective techniques, rooted in statistical 
and dynamical evaluations and modi- 
fied by the judgment of experienced
forecasters.

Forecasts of severe storms and tor- 
nadoes one to six hours in advance 
are considered "watches." In view 
of the wide area covered by the 
forecast relative to the area likely to 
be affected, the public is encouraged 
by a "watch" merely to remain alert 
to further advisories. The forecasts 
are disseminated by teletype from 
the National Severe Storm Forecast 
Center in Kansas City, Missouri, to 
local offices around the country. Oc- 
casionally, a local National Weather 
Service office may issue a modified 
local forecast which takes special 
account of peculiar local conditions. 
Since subscribers to the teletype 
service include most elements of the 
communications media, storm indi- 
cations are quickly brought to the 
attention of the radio and TV public. 

Tornado Warning 

Severe storms are observed as they 
develop by Weather Service offices,
local government authorities, and
private persons. When the Weather 
Service, through its own action or a 
report by a private observer, becomes 
aware that a severe storm or tornado 
exists, a warning to communities in 
the extrapolated path of the storm is 
issued by teletype, or immediately by 
radio and television if the situation 
warrants. The public in the threat- 
ened communities may be warned 
by various actions of local authorities, 
including the sounding of sirens. The 
few minutes' warning thus provided 
is credited with a twofold reduction 
in loss of life. The greatest loss of 
life from a tornado is often to be 
found in the first community visited 
by a storm, downstream locations 
having the benefit of longer warning
times.

These days, observer reports are 
valuably augmented by radar ob- 
servations. The primary radar net- 
work of the National Weather Service 
has stations spaced 200 to 250 miles 
apart. When severe storms threaten, 
the radar screens are monitored con- 
tinuously. The more intense echoes 
are associated with heavier precipita- 
tion and a greater likelihood of hail, 
strong straight-line winds, and tor- 
nadoes. Severe tornadoes are often 
associated with a hook-shaped ap- 
pendage on the echo. Thus, the 
forecaster's observation of the intense 
radar echoes provides a continual 
check on visual sightings and damage 
reports, and provides for timely 
warnings to communities lying in the 
projected path of a storm. 

Tornado Research 

Observations — Accurate descrip- 
tion of tornado vortices and of the 
atmospheric conditions preceding and 
accompanying tornadoes is essential 
for improved understanding and pre- 
diction of tornadoes, and for the 
possible development of practical 
means for influencing tornadoes ben- 
eficially. But scientific observation 
of tornadoes is made difficult because 
of their random occurrence, brief
duration, small size, and great vi-
olence.

In an attempt to study tornado 
vortices directly, the National Severe 
Storms Laboratory has maintained a 
network of 30 to 60 conventionally 
equipped surface stations during the 
past seven spring seasons in an area 
where tornadoes are relatively fre- 
quent. Only two of the stations, 
however, have been directly affected 
by the winds of a tornado vortex 
during this period. The network den- 
sity would have to be increased by 
a factor of 100 to obtain detailed 
data on the wind distribution in tor- 
nado vortices. For detailed informa- 
tion on the vortices, therefore, we 
are forced to rely on chance observa- 
tions, engineering analysis of dam- 
aged areas, eyewitness accounts, and 
on the results of efforts to obtain 
data remotely by photography and 
by indirect probes such as radar. 

Our information indicates that the 
tornado is characterized by an inner 
region where the winds decrease to- 
ward the center, as in solid rotation, 
and an outer region where the winds 
fall off with increasing distance. 
Many other tornado features are 
highly variable. The tornado cloud, 
presumed to be the surface of con- 
stant reduced pressure at which the 
well-mixed subcloud air is cooled to 
saturation, varies in size and shape. 
In some photographs it appears as un- 
commonly smooth, suggesting lami- 
nar flow, in others as highly irregular, 
suggesting strong turbulence. Such 
differences are quite important from 
the point of view of tornado dynam- 
ics. Since the less fierce waterspouts 
are usually cylindrical and smooth- 
walled, we are led to search for sig- 
nificant variability in surface rough- 
ness or atmospheric conditions over 
land to account for the apparent
variability of turbulence and shape 
of tornadoes. 
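
The two-region wind structure described above is the classical Rankine vortex idealization. A minimal sketch follows; the core radius and peak wind are hypothetical values chosen for illustration, since, as noted earlier, maximum tornado winds have never been accurately measured.

```python
# Rankine vortex sketch of the tornado wind profile described in the text:
# solid-body rotation inside a core radius, winds falling off as 1/r outside.
# Core radius and peak wind are assumed, illustrative values.

def rankine_wind(r, r_core=150.0, v_max=200.0):
    """Tangential wind (mph) at radius r (ft) for an idealized vortex."""
    if r <= r_core:
        return v_max * r / r_core  # inner region: winds decrease toward the center
    return v_max * r_core / r      # outer region: winds fall off with distance

print(rankine_wind(75.0))   # halfway into the core → 100.0
print(rankine_wind(150.0))  # at the core radius → 200.0
print(rankine_wind(600.0))  # four core radii out → 50.0
```

The sharpness of this profile is one reason damage paths are so narrow: winds fall to a quarter of their peak within a few core radii.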

The electrical properties of the tor- 
nadoes also appear highly variable. 
Finley's report on 600 tornadoes, pub- 
lished in 1882, lists the observation 
of thunder and lightning in 425 asso-
ciated rainstorms. In 17 cases, lumi-
nosity of an apparently electrical 
origin was noted in the tornado fun- 
nel itself, while in 49 cases the ab- 
sence of any electrical indication in 
the cloud was specifically reported. 
More recently, interest in electrical 
theories was stimulated when Jones 
reported unusual 100-kHz radiation 
from a tornadic storm. Vonnegut 
presented an electrical theory of tor- 
nadoes; Brook has reported on the 
magnetic anomaly observed during 
touchdown of a tornado near Tulsa; 
and Weller and Waite have proposed 
that tornadoes are associated with 
intense electromagnetic radiation at 
television frequencies. On the other 
hand, Gunn measured the electrical 
activity of the tornadic storm that 
devastated Udall, Kansas, on May 25, 
1955, and found it to be "more or 
less typical of exceptionally active 
storms." Rossow has measured mag- 
netic fields over numerous water- 
spouts and found little disturbance. 
Kinzer and Morgan located the posi- 
tion of sferics sources in the tornadic 
storm in Oklahoma on June 10, 1967, 
and reported no obvious connection 
between areas of cloud lightning and 
tornado locations. 

In a sense, the tornado itself is only 
an important detail of the circulation 
and energy balance of the larger 
thunderstorm. By virtue of its larger 
size and greater frequency, the typical 
parent thunderstorm lends itself much 
more to detailed examination. There- 
fore, present research is concentrated 
on identifying details in atmospheric 
structure associated with formation 
of tornadic and non-tornadic storms, 
with the variable behavior of different 
storms that form in the same general 
area, and with the evaluation of the 
way forces manifested in the storm 
environment combine to produce ma- 
jor features of the in-storm motions. 
To this end, experimental networks 
of closely spaced surface and upper 
air stations are used along with quan- 
titative radar and specially instru- 
mented aircraft. 

We have learned that severe and 
enduring tornadoes form near the
small low-pressure areas associated
with the hook-shaped radar echo 
marked by the arrow in Figure V-7. 
Within the last decade the combina- 
tion of observations and data gath- 
ered by many sensors at one place 
has taught a great deal about major 
features of thunderstorm circulation 
and, indeed, has revealed important 
but hitherto unidentified distinct 
storm classes. 

Mathematical Modeling — All 
present-day mathematical models of 
weather represent extreme simplifica- 
tions of the natural phenomena. We 
are still especially far from simulat- 
ing realistically and in combination 
the many factors associated with the 
development of local storms. 

Most adequate for their purpose 
are the models of atmospheric be-
havior on the scale of the general cir-
culation and large weather systems.
In use at the National Meteorological
Center in Washington, D.C., such
models predict the general patterns 
of horizontal wind, moisture, and 
vertical currents; they provide useful 
guidance to the thunderstorm fore- 
caster, who combines their indications 
with his knowledge of the distribu- 
tion of features specifically associated 
with local storms — and with his 
judgment — to forecast the probable 
location of storms. Models that fore- 
cast directly the parameters known 
to be important to thunderstorm de- 
velopment are just beginning to come 
into operational use. Some incorpo- 
rate both dynamical and statistical 
methodology and provide somewhat 
more detailed spatial distributions 
over the United States than has been 
available heretofore. 


The picture is of a Plan Position Indicator (PPI) presentation of a severe storm over 
Oklahoma City on May 26, 1963. Range marks denote intervals of 20 nautical miles. 
North is toward the top. The radar is located at the center of the range circles. 
The arrow points out the location of the tornado. 



Local convective phenomena are 
significantly affected by a greater 
variety of processes and factors than 
widespread weather, and are corre- 
spondingly more difficult to model 
realistically. To date, we have some 
two-dimensional models that incor- 
porate simplified formulations of pre- 
cipitation-related processes and of 
entrainment. These show some skill 
in predicting, for example, the maxi- 
mum height to which a cloud tower 
rises with specified ambient condi- 
tions. The most comprehensive of 
today's models, however, is probably 
less detailed by a factor of at least 
100 than one that would illustrate 
significant features of the asymmetric 
horizontal and vertical structure. 

Today's mathematical models of 
the tornado itself treat cylindrically 
symmetric cases. At the edge of 
knowledge, we find steady-state mod- 
els such as Kuo's, which appears to 
describe essential features of observed 
tornadoes in terms of an unstable 
vertical stratification and an ambient 
field of rotation. The fact that these 
features are often present when tor- 
nadoes are absent, however, serves to 
emphasize that we still have very far 
to go in our modeling and observing 
to identify the factors responsible for 
concentrating angular momentum in 
the developing tornado. 

Experiments — The control of pa- 
rameters afforded by laboratory con- 
ditions recommends the experimental 
approach to identification and analy- 
sis of factors responsible for the 
growth of tornadoes. Such experi- 
ments have been conducted for many 
years, often in conjunction with theo- 
retical investigations, and realistic- 
appearing vortices have been pro- 
duced in various liquids and in air 
under a considerable variety of ex- 
perimental conditions. The very ease 
with which tornado-like vortices can 
be produced experimentally has made 
it difficult to progress much beyond 
theoretical implications regarding the 
development of swirling motion in 
converging fluid at the base of a ris- 
ing column, and the important influ- 
ence of boundaries. 

Concurrent with the recent devel- 
opment of numerical analysis of 
large-scale atmospheric circulations, 
however, has come appreciation of 
the importance of similarity both in 
theoretical and experimental model- 
ing. Similarity in flows on different 
scales is said to exist when the ratios 
of various quantities involving inertia, 
viscosity, rotation, and diffusion are 
the same. Considerations of similar- 
ity, and increased attention to such 
natural observations as are available, 
are leading to the design of models 
more revealing of the effects of natural conditions. 

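The similarity criterion can be made concrete: two flows are dynamically similar when their governing dimensionless ratios agree. The following sketch (the function names and every numerical value are illustrative assumptions, not figures from the studies discussed) compares a hypothetical laboratory vortex with a hypothetical atmospheric one on the basis of the Rossby number:

```python
def reynolds(velocity, length, kinematic_viscosity):
    """Reynolds number: ratio of inertial to viscous forces."""
    return velocity * length / kinematic_viscosity

def rossby(velocity, length, rotation_rate):
    """Rossby number: ratio of inertial to rotational (Coriolis) forces."""
    return velocity / (2.0 * rotation_rate * length)

# Hypothetical laboratory vortex: 1 m across, 5 m/s flow in water
# (kinematic viscosity ~1e-6 m^2/s), on a turntable rotating at 1 rad/s.
lab_re = reynolds(5.0, 1.0, 1.0e-6)
lab_ro = rossby(5.0, 1.0, 1.0)

# Hypothetical atmospheric vortex: 100 m across, 50 m/s flow, ambient
# rotation 0.1 rad/s.  Its Rossby number matches the laboratory flow's,
# so the two share the same rotational regime.
atm_ro = rossby(50.0, 100.0, 0.1)

print(lab_ro, atm_ro)  # both 2.5
```

Matching every relevant ratio at once, rather than one at a time, is what makes laboratory similarity difficult in practice.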
Thus, Turner and Lilly have con- 
structed physical models of vortices 
driven from above to simulate the 
convection in a cloud, and have found 
rising motion in the vortex core with 
descending motion in a surrounding 
annulus. Ward, noting that no tor- 
nado vortex can be indefinitely long, 
has ingeniously separated a fan from 
the vortex it creates in controlled in- 
flow beneath. In this model, his con- 
trol of the inflow angle and depth of 
the inflow layer represents the most 
important influences in the creation 
of a vortex, its intensity and diameter, 
and, in contrast to earlier models, the 
development of a central downdraft. 

The problems of developing theo- 
retical and experimental models in- 
dicate the importance of observations 
on even gross characteristics of tor- 
nado circulations. Is the flow upward 
or downward in the funnel core? 
How is tornado behavior, such as 
funnel-skipping, related to the rough- 
ness of underlying terrain? What is 
the wind inflow angle and air pres- 
sure at various distances from the 
visual funnel? How does the wind 
vary with height in the vicinity of 
tornadoes? If we could better answer 
these questions for atmospheric cases, 
we could design experiments accord- 
ingly, and rationally extend our 
search for influential parameters of 
the flow. 

Comments on Investigational Techniques 

We have surveyed observational, 
theoretical, and experimental aspects 
of tornado investigations. The vari- 
ety and complexity of processes im- 
plicit in tornado development and 
maintenance, and the rarity, relatively 
small scale, and intensity of the natu- 
ral phenomena have been sources of 
great difficulty. Let us briefly con- 
sider the helpful technological ad- 
vances that may reasonably be antici- 
pated and whose development should 
be encouraged. 

Emerging Observational Tech- 
niques — With regard to observa- 
tions, no available prototype tech- 
nique seems practical for measuring 
details of the distribution of velocity 
and other parameters in a tornado 
vortex. With the encouragement of 
severe-storm study programs, how- 
ever, greater numbers of observations 
— including useful motion pictures — 
should become available, and we may 
reasonably expect an opportunity in 
the next few years to extend the im- 
portant study of the Dallas tornado 
of April 2, 1957, made by Hoecker 
and his colleagues. 

Emphasis should be placed on ob- 
serving the circulations around severe 
storms, since it is certain that the 
intensity of a storm and the occur- 
rence of tornadoes are greatly con- 
trolled by the storm environment. In 
addition to encouraging existing pro- 
grams having this objective, we may 
put special emphasis on two emerg- 
ing tools. One is meteorological 
doppler radar, which in units of two 
or three can map the distribution of 
precipitation velocity with unprece- 
dented detail. The development of an 
improved doppler capability would 
have value both for fundamental re- 
search and for research on an im- 
proved warning system, the latter by 
providing bases for evaluating the 
distinguishing features in a storm 
velocity field characteristic of an im- 
pending tornado. Doppler capabil- 
ity for clearer tornado identification 



needs to be assessed. Although some 
meteorological doppler radars are 
presently in use and other systems 
are under development, the pace of 
work seems slow. 
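The principle behind such measurements can be stated in one line: a Doppler radar recovers the along-beam (radial) velocity of precipitation from the frequency shift of the returned signal, v = f_d × λ/2 for the two-way radar path. A minimal sketch (the numbers are illustrative only):

```python
def radial_velocity(doppler_shift_hz, wavelength_m):
    """Radial velocity from a radar Doppler shift.
    The factor of 2 accounts for the two-way (out-and-back) path."""
    return doppler_shift_hz * wavelength_m / 2.0

# A 10-centimeter radar measuring a 500 Hz shift sees precipitation
# moving at 25 m/s along the beam.
print(radial_velocity(500.0, 0.10))  # 25.0
```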

The second emerging technique is 
satellite infrared spectrometry, which 
is providing new detail on the vertical 
thermal stratification of the atmos- 
phere at intervals of about 30 miles. 
Further development of the satellite 
system should result in better analy- 
sis of severe thunderstorm precursor 
conditions over the United States and 
refinement of our forecasting ability. 

Computers — With regard to 
mathematical modeling, greater real- 
ism will be possible as computers 
become larger and faster and as theo- 
retical models are revised in light of 
observations and experimental re- 
sults. Of course, many techno-socio- 
logical forces are already encouraging 
the development of improved com- 
puters. We may emphasize here that 
no conceivable computer can ever 
solve meteorological problems in such 
a way that careful scientists will 
not be an essential part of problem 
preparation; indeed, theoretical in- 
terpretation of data from observa- 
tional and experimental programs 
will be increasingly required to de- 
velop reasonably posed mathematical problems. 

Physical Models — With regard to 
physical modeling of thunderstorms 
and tornadoes, the difficulties inher- 
ent in modeling significant atmos- 
pheric processes such as condensa- 
tion and precipitation, in diminishing 
the effect of container sidewalls to 
levels consistent with the atmos- 
phere's lack of sidewalls, and in 
simulating the vertical density gra- 
dient and diffusion processes charac- 
teristic of the atmosphere will con- 
tinue to represent serious obstacles. 
These problems have been less seri- 
ous with respect to interpretation of 
the more essentially two-dimensional 
flows representative of atmospheric 
circulations on larger scales. Never- 
theless, experimental methods should 

continue to be important for testing 
tornado hypotheses and suggesting 
new lines for observational and theo- 
retical study. 

The General Status of the 
Operational System for Severe 
Storm Prediction and Warning 

Present-day severe-storm forecasts 
are immensely valuable, but we wish 
they were more precise and more ac- 
curate. Although numerical methods 
have been used for forecasting large- 
scale weather patterns for over ten 
years, the development of mathemati- 
cal models relevant to the smaller 
scale of local storm complexes is still 
in its infancy. Basic improvements 
in the quality of severe-storm fore- 
casts depend on the development of 
new understanding of storm struc- 
ture and dynamics, the interaction 
between severe local storms, and 
the larger patterns of air motion 
that establish the general conditions 
favorable for storm development. As 
previously indicated, such improved 
understanding can be expected to 
evolve only as the insights provided 
by more detailed observations are 
assessed by careful scientists with the 
aid of more powerful computers. 
Eventually, methods will be devel- 
oped combining such detailed data as 
that provided by radar and satellites 
with other weather parameters in 
dynamical storm models; appropriate 
ways to use such detail in operational 
forecast preparation should then be- 
come clear. 

At present, we can strive to hasten 
the preparation and distribution of 
such forecasts as we have. To this 
end, hand analysis of patterns signifi- 
cant to local storm development is 
being significantly replaced by com- 
puter techniques. The radar network, 
which is the backbone of the system 
used for severe-storm warning, also 
lends itself to significantly advanced 
automation. Displays like that shown 
in Figure V-7 can be replaced by 
contour-mapped echo representa- 
tions. (See Figure V-8) A correspond- 

ing digital array can be produced 
simultaneously (see Figure V-9) as a 
basis for automatic preparation and 
dissemination of extrapolation fore- 
casts. In midwestern United States, 
the Weather Service is presently 
starting to develop an operational 
test of advanced radar systems in 
order to evaluate the probable costs 
and benefits of various system de- 
signs for nationwide application. 

Prospects for a Measure of 
Tornado Control 

The energy production involved in 
one severe local storm is comparable 
to the total power-generating capacity 
of the United States. Thus, the 
control of severe-storm phenomena 
clearly requires an ability to direct 
far greater amounts of energy than 
those locally applied by man at pres- 
ent. This will depend on developing 
knowledge of how to modify the 
processes by which nature's supply is 
utilized. For example, silver iodide 
and a few other chemicals are used to 
stimulate the freezing of water drops 
that otherwise remain liquid during 
cooling to temperatures somewhat 
below their melting point; the arti- 
ficial release of the latent heat of 
fusion thus achieved can raise the air 
temperature enough to enhance sig- 
nificantly the growth of some clouds 
and to hasten the dissipation of 
others. Conceivably, this kind of 
process could be applied to alter na- 
ture's choice for rapid growth among 
a host of nearly identical clouds. 

Other means for modifying torna- 
does might involve alteration of the 
earth's topography and roughness to 
decrease the probability of tornadoes 
over inhabited areas, and the direct 
application of heat at a point in time 
and place where such application 
would beneficially modify the course 
of subsequent events. It must be 
plain from the foregoing discussion, 
however, that we are still very far 
from having a reasonable basis even 
for estimating the likelihood that such 
efforts could ever be successful. 




The figure shows the PPI-scope of the 10-centimeter WSR-57 radar at the National 
Severe Storms Laboratory in Norman, Oklahoma on April 26, 1968. Differences in 
shading indicate intervals of a factor of 10 in received echo power. From such an 
electronic display it is possible to determine the most dense part of a storm. Range 
marks are at intervals of 20 nautical miles. North is at the top of the figure. 





The figure shows a digital version of the data shown in Figure V-8. The successive 
horizontal lines represent 2° steps of azimuth. These are noted in the leftmost 
column. The vertical lines represent 20-nautical-mile intervals, the dots at the top 
and bottom of the diagram represent one-nautical-mile intervals. Successive digits 
on the map represent factors of seven in the echo intensity. 



Tornadoes — Their Forecasting and Potential Modification 

A tornado, also called cyclone or 
twister, is defined as a violently rotat- 
ing column of air, pendant from a 
cumulonimbus cloud, and nearly al- 
ways observable as a funnel. The 
shape of a funnel varies from a cone 
to a rope; its lower end does not 
always touch the ground. A con- 
firmed small tornado could be char- 
acterized by a damage area of 10,000 
square feet, while the swath of a 
giant tornado covers more than 30 
square miles. Thus, a giant tornado 
could be 50,000 times larger than a 
tiny one in terms of potential damage area. 

The annual tornado frequency 
changed from a minimum of 64 in 
1919 to a maximum of 912 in 1967, 
which represents a ratio of 1:14. This 
does not mean that tornado frequency 
increased by at least one order of 
magnitude. Instead, reporting effi- 
ciency — related to the reporting sys- 
tem, urban development, population 
density, and such — probably in- 
creased the apparent tornado fre- 
quency. It is preferable, therefore, to 
evaluate the potential danger of tor- 
nadoes according to damage areas 
rather than their number of occurrences. 

Damaging Tornadoes 

When a tornado warning is issued, 
the general public will be looking for 
the nearest storm shelter for protec- 
tion of life. Statistics show, however, 
that 50 percent of the total tornado 
damage area is produced by only 4 
percent of the tornadoes. This means 
that half of the potential damage 
area can be warned efficiently if the 
top 4 percent of tornadoes are pre- 
dicted with great accuracy. If the top 
10 percent of tornadoes can be pre- 
dicted, their damage area would cover 
75 percent of the total damage area. 
Although these statistics do not sug- 
gest that only large tornadoes should 

be predicted to the neglect of others, 
accurate prediction of large tornadoes 
would be of great value to local communities. 

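The concentration of damage in the largest storms can be expressed as a cumulative share. The sketch below uses an invented heavy-tailed sample of damage areas (the numbers are ours, chosen only to mimic the 4-percent/50-percent pattern, not observed statistics):

```python
def damage_share(areas, top_fraction):
    """Fraction of the total damage area contributed by the largest
    `top_fraction` of tornadoes; `areas` in any consistent unit."""
    ranked = sorted(areas, reverse=True)
    n_top = max(1, round(top_fraction * len(ranked)))
    return sum(ranked[:n_top]) / sum(ranked)

# Invented sample of 100 tornadoes: four giant swaths dominate many tiny ones.
sample = [30.0, 25.0, 10.0, 5.0] + [0.1] * 96   # square miles
print(damage_share(sample, 0.04))  # ~0.88: the top 4 storms carry most of the area
```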
Small Tornadoes — The origin of 
large, long-lasting tornadoes seems to 
be quite different from that of the 
tornadoes at the small end of the size 
spectrum. Small tornadoes and water- 
spouts are so similar in dimension and 
appearance that the former can be 
regarded as waterspouts traveling 
over land. These small storms, al- 
though they make up a large number 
of all storms, are very difficult to 
predict. They may form within a local 
shear line associated with growing 
cumulus clouds that may or may not 
become thunderstorms. Small torna- 
does last only a few minutes, leaving 
a damage swath of only a few miles. 

Hook-Echo Tornadoes — Large tor- 
nadoes frequently last 30 to 60 min- 
utes. Furthermore, in many cases 
several tornadoes of similar size and 
intensity appear one after another, 
thus forming a family of large torna- 
does. When radar pictures of proper 
gain and of low elevation angles are 
examined, almost all tornadoes in 
such a family are related to a thunder- 
storm echo with rotational character- 
istics — i.e., a rotating thunderstorm is 
a spawning place for one to several 
large tornadoes. 

When the view is unobstructed, a 
rotating thunderstorm can be photo- 
graphed at large distances as a bell- 
shaped cloud with an over-all diame- 
ter of 5 to 25 miles. The same cloud 
would appear in a plan-position- 
indicator (PPI) radarscope as a "hook 
echo," with an eye at the rotation 
center and several echo bands spiral- 
ing around the eye-wall circulation. 
Despite the fact that a family of 
tornadoes comes from a rotating 
thunderstorm, not every rotating 
thunderstorm or hook echo spawns a 
tornado during its lifetime. It is likely 

that at most 50 percent 
of hook echoes spawn tornadoes — 
usually large ones. Hook-echo torna- 
does are responsible for more than 
half of the damage areas caused by 
all tornadoes. 

Detecting Large Tornadoes — The 
above evidence leads to the conclu- 
sion that large tornadoes spawn from 
mesoscale vortex fields identified as 
rotating thunderstorms, hook echoes, 
or tornado cyclones. The outermost 
diameter of such a vortex ranges be- 
tween 5 and 25 miles. The eye, sur- 
rounded partially or totally by a hook- 
shaped echo, rotates at the rate of 
20 to 40 miles per hour at its outside 
edge and is 1 to 3 miles in diameter. 
The central pressure of a tornado- 
bearing mesoscale vortex or tornado 
cyclone is only 2 to 4 millibars lower 
than its far environment. An imprac- 
tically large and expensive network 
of barograph stations would be re- 
quired for detecting tornado cyclones. 
Unless a doppler radar network be- 
comes available in the future, PPI- 
scope pictures in iso-echo pres- 
entation with better than one-mile 
resolution will provide the only means 
of detecting tornado cyclones within 
some 10 minutes after their formation. 

Early detection of tornado cyclones 
is the key to a warning within a 
narrow zone in which there is a 
chance of tornado formation. Such 
an alley is only 5 miles wide and 50 
miles long on the average, while a 
tornado watch area extends 50 x 100 
miles, some 20 times larger than one 
alley area. 

Maximum Tornado Windspeed 

Windspeed is an important pa- 
rameter, necessary for the design of 
tornado protective structures. When 
settlers first experienced the impact 
of tornadoes in the Midwest, they 
estimated maximum windspeed to 
be in excess of 500 miles per hour. 
Some even estimated supersonic 
speeds. 
Damage investigation since then 
has reduced general windspeed esti- 
mates to between 300 and 500 miles 
per hour. If these maximum-speed 
estimates are accurate, they would, 
when combined with the storm's 
pressure reduction, make it impossi- 
ble to construct tornado-proof struc- 
tures at reasonable cost. 

Fujita's study of tornadoes during 
the past ten years, however, has now 
led to the conclusion that the maxi- 
mum windspeed of tornadoes is much 
less than previously thought. Maxi- 
mum rotational windspeeds, as esti- 
mated from scaling motion pictures 
and characteristic ground marks, are 
about 200 miles per hour. The trans- 
lational motion of the storm must be 
added to the right side and sub- 
tracted from the left side of the rotat- 
ing core. If a tornado travels at its 
average speed of 40 miles per hour, 
the maximum combined speed above 
the frictional layer would be 240 
miles per hour. Some tornadoes, such 
as the ones on Palm Sunday, 1965, 
traveled eastward at 62.5 miles per 
hour. For these storms, the maximum 
combined windspeed would be 260 
miles per hour. Inside the boundary 
layer, the gust speed must be added 
to the mean flow speed, which de- 
creases toward the ground. Under 
the conservative assumption that the 
peak gust speed can more than offset 
the decrease in the flow speed toward 
the ground, a maximum gust speed of 
300 miles per hour seems quite 
reasonable. Thus, one has: 

Maximum rotational speed. . . .200 mph 

Maximum traveling speed 70 mph 

Maximum gust speed 300 mph 
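The combined-speed figures above follow from simple addition of rotation and translation, which a few lines make explicit (a sketch of the arithmetic only; the speeds are those quoted in the text):

```python
def combined_speed(rotational_mph, translational_mph, side="right"):
    """Peak wind of a translating vortex: rotation plus translation on
    the right flank, rotation minus translation on the left."""
    if side == "right":
        return rotational_mph + translational_mph
    return rotational_mph - translational_mph

print(combined_speed(200, 40))    # 240 mph for a typical 40-mph storm motion
print(combined_speed(200, 62.5))  # 262.5 mph for the fast Palm Sunday storms
print(combined_speed(200, 40, side="left"))  # 160 mph on the slow flank
```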

It should be noted that the higher 
estimated speeds were obtained by 
assuming the cycloidal ground marks 
were produced by one rotating object. 
Fujita's study has indicated that there 
are 3 to 5 spots which produce cy- 
cloidal marks. Thus, the speed for 
any one tornado of a family must be 
reduced by a factor of three to five. 

Minimum Pressure Inside Tornadoes 

As in the case of tornado wind- 
speed, in earlier days pressure reduc- 
tion at the center of tornadoes had 
been overestimated to be a near 
vacuum or 2,000 pounds per square 
foot. Since then, meteorologists have 
tended to agree that the pressure re- 
duction at the storm center is between 
200 and 400 millibars. 

It should be noted that a building 
will suffer also from differential pres- 
sure from its form resistance. A 300 
miles per hour wind will produce a 
positive stagnation pressure of about 
90 millibars at its windward side. 
Over the roof, however, the pressure 
may be negative, with the result that 
the roof is lifted. (The lifting force 
cannot be estimated unless the com- 
plete shape of the building is given 
and a wind-tunnel test is performed.) 
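The 90-millibar figure is essentially the dynamic (stagnation) pressure q = ½ρv². A sketch of the arithmetic (sea-level standard density is assumed here; the warm, less dense air feeding a tornado would bring the result down toward the report's 90 millibars):

```python
def stagnation_pressure_mb(wind_mph, air_density=1.225):
    """Dynamic pressure q = 0.5 * rho * v**2, returned in millibars.
    Density defaults to the sea-level standard 1.225 kg/m^3."""
    v_ms = wind_mph * 0.44704        # miles per hour -> meters per second
    return 0.5 * air_density * v_ms ** 2 / 100.0   # pascals -> millibars

print(round(stagnation_pressure_mb(300)))  # 110
```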

Potential Tornado Protection and Modification 

As a result of more recent wind- 
speed and pressure estimates, criteria 
for designing tornado-resistant struc- 
tures have now become feasible. Such 
structures could be expensive, al- 
though future designs and improved 
material could reduce costs to a level 
where at least public buildings in a 
tornado alley could be built to with- 
stand tornado wind and pressure. 

Tornadoes vary in both shape and 
size. The four most commonly ob- 
served shapes are: 

Cone shape: Large tornadoes drop 
down in the shape of a cone; as 
the storm develops, the tip of 
the cone reaches the ground. 

Column shape: A tornado or a 
large waterspout takes the shape 
of a large trunk. 

Chopstick shape: This is the 
shape of weak tornadoes and 
waterspouts with small diameters. 

Rope shape: When tornadoes be- 
come very weak, they change 
into a rope which often extends 
miles in a semi-horizontal direction. 

Although tornadoes have such 
different shapes, all tornadoes and 
waterspouts are characterized by a 
core circulation surrounded by a cir- 
cle of maximum wind. Outside this 
circle, the tangential windspeed de- 
creases in inverse proportion to the 
distance from the circulation center. 
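This velocity structure is the classical Rankine vortex: solid-body rotation inside the circle of maximum wind, with tangential speed falling off as 1/r outside it. A sketch (the 200-mph and 150-yard numbers are illustrative, not measurements):

```python
def rankine_tangential(r, core_radius, v_max):
    """Rankine-vortex tangential speed at distance r from the center:
    rises linearly to v_max at the core edge, then decays as 1/r."""
    if r <= core_radius:
        return v_max * r / core_radius
    return v_max * core_radius / r

# Illustrative tornado: 200 mph maximum wind at a 150-yard core radius.
print(rankine_tangential(75, 150, 200))    # 100.0 mph halfway into the core
print(rankine_tangential(300, 150, 200))   # 100.0 mph at twice the core radius
```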

Chopstick- or rope-shaped funnels 
may be considered axially symmetric. 
When the core diameter increases, as 
in the case of the cone and trunk 
shapes, there are several spots of 
strong suction around the edge of the 
core; thus, they are no longer axially 
symmetric. These spots of strong 
suction rotate around the funnel at 
the speed of the funnel rotation. 

Three ways of modifying tornado 
windspeed may be considered. They 
are: (a) a reduction of the circulation 
energy; (b) an increase in the core 
diameter without changing the circu- 
lation intensity; and (c) reduction of 
the windspeed near the ground. 

Reducing the Circulation Energy 
— This possibility depends on the 
counteracting energy that can be cre- 
ated artificially. The total kinetic 
energy of a tornado is on the order 
of 10^7 kilocalories, which is just about 
1/1,000 of a small, 20-kiloton atomic 
bomb. The energy of even the largest 
of tornadoes is comparable only to 
1/100 of the energy in a small atomic 
bomb. Atomic bombs obviously can- 
not be used to modify a tornado. 
We might, however, investigate such 
power sources as an artificial jet in 
order to learn more about how the 
relatively small and concentrated en- 
ergy of a tornado might somehow 
be dispersed. 
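The bookkeeping behind the 1/1,000 comparison is worth making explicit (orders of magnitude only; one ton of TNT releases about 10^9 calories):

```python
KCAL_PER_TON_TNT = 1.0e6                 # 10^9 cal = 10^6 kcal per ton of TNT
bomb_kcal = 20_000 * KCAL_PER_TON_TNT    # 20-kiloton bomb: ~2e10 kcal
tornado_kcal = 1.0e7                     # typical tornado kinetic energy (text)

ratio = tornado_kcal / bomb_kcal
print(f"{ratio:.0e}")   # 5e-04, i.e. of order 1/1,000
```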



Increasing the Core Diameter — 
This definitely reduces the maximum 
tornado windspeed that occurs just 
outside the core. Modification of hur- 
ricanes through eye-wall seeding is 
based on the similar principle in 
which the release of latent heat 
around the eye wall will literally ex- 
pand the eye diameter, thus reducing 
the extreme pressure gradient around 
the eye. In the case of tornadoes, it 
might be possible to cool the lowest 
portion of the core circulation. If we 
inject water droplets into the core at 
a certain level between the ground 
and the cloud base, they will evapo- 
rate as they slowly centrifuge out, 
thus cooling the core to increase the 
descending motion inside the core. 
The lower portion of the core will 
then expand, reducing the maximum windspeed. 

Contrary to older reports, a tor- 
nado cannot suck up a body of water 
beneath its core. Investigation of 

ground marks has revealed that the 
suction power of a tornado is weaker 
than a suction head of a household 
vacuum cleaner placed close to the 
surface. It is, therefore, necessary to 
deliver a large amount of water in 
drop form into the core. 

Reducing Windspeed Near the 
Ground — This could be achieved by 
constructing a number of deflectors 
to the west and southwest of an im- 
portant structure such as an atomic 
power plant. The deflectors should 
be oriented in such manner that they 
change the southeast winds on the 
advancing side of a tornado to a 
northeast wind or possibly to a north- 
northeast wind, thus creating a flow 
converging toward the tornado cen- 
ter. The net effect of the convergence 
will be to reduce the speed near the 
surface. Design of deflectors should 
be made through aerodynamic calcu- 
lations and a wind-tunnel test. 

Other Activity 

Methods of estimating tornado 
windspeed should be explored and 
tested whenever feasible. Direct 
measurement is desirable if "maxi- 
mum wind indicators" are to be de- 
signed to stand against tornado wind. 
Measurement of object motion inside 
the tornado does not always give the 
air motion. Especially when an explo- 
sion of a structure is involved, the 
initial object velocity is likely to be 
overestimated. The designing of a 
low-priced "minimum-pressure indi- 
cator" for placement over the area of 
expected tornado paths is also recommended. 

Basic research on tornado modifi- 
cation also needs to be carried on 
through various model experiments 
and theoretical studies. Furthermore, 
although the probability of tornadoes 
is small, some important structures 
must be protected against severe tornadoes. 

Tornado Forecasting and Warning 

Tornado frequency within the 
United States varies from 600 to 900 
per year, with the major concentra- 
tion through the Central Plains. 
Ninety percent of all tornadoes have 
a path-length between 0.5 and 50 
miles and path-width between 40 and 
800 yards. The median tornado has 
a path-length of 5 miles with a path- 
width of 200 yards. The median 
destructive period is less than 30 
minutes. Less is known about tor- 
nado velocity profiles, but one can 
estimate that 90 percent of the peak 
speeds are between 100 and 225 miles 
per hour, with a median peak velocity 
of 150 miles per hour. Unfortu- 
nately, the upper limit appears to be 
around 300 miles per hour. 
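The median figures above imply a damage swath of order one square mile, as the next paragraph assumes; the arithmetic:

```python
YARDS_PER_MILE = 1760

def swath_area_sq_miles(length_miles, width_yards):
    """Area of a straight damage track."""
    return length_miles * width_yards / YARDS_PER_MILE

# Median track: 5 miles long, 200 yards wide.
print(round(swath_area_sq_miles(5, 200), 2))  # 0.57, of order one square mile
```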

Thus, the problem is to forecast 
the occurrence of a rare meteorologi- 
cal event which has median dimen- 
sions of one square mile over a 
30-minute period, and to forecast it 

sufficiently far in advance to allow 
effective use of forecasts by all in- 
terested parties. There should be 
suitable differentiation for tornado 
classes based on width, length, and 
peak velocity. None of the above is 
possible at this time for areas of 
less than several thousand square 
miles and for more than one hour in advance. 

Matters Contributing to the 
Forecast Problem 

Data Network — The average dis- 
tance between full-time surface re- 
porting stations is 100 miles. Reports 
are made every hour, more often when 
special criteria are met. Unless the 
special report is taken and trans- 
mitted near a free time-period in the 
teletype schedule, it is quite probable 
that the report will be delayed 10 
minutes in reaching the user. Thus, 

the spacing and frequency of reports 
taken with the standard data network 
are not adequate to fully describe the 
severe weather events taking place 
within the confines of the data network. 

The average distance between 
upper air stations is 150 miles — and 
slightly more than that in the areas 
of high tornado incidence. Rawin- 
sonde releases are scheduled only 
every 12, and on occasion every 6, 
hours. But the 1200 Greenwich Mean 
Time (GMT) release is made in the 
Midwest at 6 a.m. Central Standard 
Time (CST), a minimum thunder- 
storm period, while the midnight 
GMT release is made at 6 p.m. CST, 
a maximum thunderstorm period. Ef- 
fectively, this produces only one use- 
ful report per day per station. These 
reports are not adequate to fully de- 
scribe the temperature, moisture, and 
wind patterns within the tropo- 
sphere. This is due partly to their 
spacing and frequency and partly to 
errors inherent in the equipment. 

In addition, there are data voids 
in the areas surrounding the United 
States, such as the Gulf of Mexico, 
the Atlantic waters adjacent to the 
east coast, and portions of Mexico 
and Canada. All of these contribute 
to serious lateral boundary prob- 
lems, the most pressing being the 
Gulf of Mexico. Texas, Louisiana, 
Mississippi, Alabama, Florida, and 
Georgia are all high-incidence areas 
for destructive tornadoes, and the 
lack of any direct meteorological data 
over the Gulf of Mexico has made 
objective analysis and prediction difficult. 

To augment the conventional sur- 
face and upper air networks, use has 
been made of radar and satellite 
photographs. The processing and 
display of either data source is still in 
its infancy; considerable experimenta- 
tion will be required to obtain con- 
tinuous readout of radar- and satel- 
lite-produced information. At present, 
neither the radar nor satellite output 
is woven into conventional analyses 
in a systematic and objective manner. 

Forecast Methods — Present meth- 
ods are largely subjective, drawing 
heavily on case studies and the ex- 
perience of the individual forecaster. 
This approach is slowly being replaced by
objective, computer-oriented methods, 
partly dynamical and partly statisti- 
cal. (See Figure V-10) Considerable 
improvement is needed for either 
method. The most promising avenue 
for dynamical methods concerns the 
development of a fine-mesh primitive 
equation model for multi-layers. Such 
a model would be of limited value at 
this time because of the data limita- 
tions noted, but it will become in- 
creasingly important as the average 
spacing between stations is reduced. 
The statistical approach involves a 
search for predictors through the use 
of multiple-screening regression tech- 
niques. It has not been possible to 
gather all of the possible predictors 














Distance and direction          Period of closest
of echo from airport            approach (GMT)

12.5 N                          2125-2154
23.5 N                          2131-2205
12.4 S                          2131-2205
18.0 N                          2133-2210
21.3 N                          2140-2221
25.2 N                          2152-2242

The table illustrates an experimental severe weather warning of a thunderstorm 
cell moving from 256° at 23 knots. The warning gives the time of closest approach 
to airports near the forecast path. It also gives the distance and the direction of 
the echo from the airport. Finally, it estimates potential error of the forecast in 
terms of the time period of closest approach. This warning was prepared auto- 
matically by a computer using statistical properties of radar echoes such as those 
measured in Figures V-8 and V-9. 
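The kinematics behind such a warning can be sketched in a few lines of code. The following Python fragment is an illustration only, with assumed coordinates and hypothetical names, not the operational program: given an echo's position, its "moving from" direction and speed, and an airport's position, it returns the time and distance of closest approach.

```python
import math

def closest_approach(echo_xy, from_deg, speed_kt, airport_xy):
    """Time (hours) and distance (nautical miles) of closest approach of a
    storm echo moving in a straight line at constant speed.
    Positions are (east, north) in nautical miles from a common origin;
    from_deg is the meteorological 'moving from' direction."""
    to_deg = (from_deg + 180.0) % 360.0             # direction of motion
    vx = speed_kt * math.sin(math.radians(to_deg))  # eastward component, kt
    vy = speed_kt * math.cos(math.radians(to_deg))  # northward component, kt
    rx = airport_xy[0] - echo_xy[0]
    ry = airport_xy[1] - echo_xy[1]
    v2 = vx * vx + vy * vy
    t = max(0.0, (rx * vx + ry * vy) / v2)          # hours; clamp to the future
    dx, dy = rx - vx * t, ry - vy * t
    return t, math.hypot(dx, dy)

# Echo at the origin moving from 256 degrees at 23 knots (as in the table),
# with a hypothetical airport 19 nm east and 5 nm north of the echo.
t, d = closest_approach((0.0, 0.0), 256.0, 23.0, (19.0, 5.0))
print(round(t * 60), round(d, 1))  # minutes to closest approach, miss distance
```

A forecast-error window such as the one in the table would then come from repeating this computation for the extremes of the cell's estimated speed and direction.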

along with tornado occurrences, so
this approach will require further
development.
Research and Development — Com- 
paratively little research on forecasts 
is being performed in this country. 
In allied fields, considerable research 
and development is under way on 
hail suppression, doppler radar, 
LIDAR (light detection and ranging), 
and remote-sensing techniques. Im- 
proved equipment and techniques will 
have application to the warning
problem.

Several theories have been ad- 
vanced to explain the Great Plains 
tornado. These theories do not, how- 
ever, explain the hurricane-induced 
tornado, the western U.S. tornado, 
or the waterspout. A great deal more 
work is needed in modeling tornado
dynamics.
Prediction Techniques 

The same problems apply to the 
warning as to the forecast. A vast 
majority of reported tornadoes do not 

come close enough to any of the 
reporting stations to be detected, ei- 
ther visually or by instruments. 

Radar Detection — The radar net- 
work is being expanded throughout 
the United States, using 10-centimeter 
radar. This is effective to 125 nautical 
miles in defining severe thunder- 
storms capable of producing torna- 
does, but even a highly skilled radar 
operator cannot clearly identify a 
tornado by radar or give a 15-minute 
forecast that a certain cloud will 
produce a tornado. Certain charac- 
teristic shapes provide some informa- 
tion on the probability of tornadoes, 
but the pattern is not present for 
every tornado. 

Instrument Detection — There are 
no mechanical methods at this writ- 
ing that can make an objective dis- 
tinction between the pressure fall or 
rise produced by a strong squall line 
and that produced by a tornado. 
Even if there were such a device, the 
spacing required to insure its
usefulness would be prohibitively
expensive.
Volunteer Spotters — Most warning
is based on a combination of radar
detection and visual spotting, usually
performed by volunteers. This gives
uneven results at best, since the
ability of the spotter is as much a
function of his zeal as anything else.
The timeliness of the warning is a
function of the spacing of the
spotters.

To be of maximum value, a warning
should be as specific as possible with
regard to area and time. More work
is needed on "steering methods" for
tornadoes once they are known to
exist. No work at all has been done
to determine how long a tornado will
be in contact with the ground once
it has been detected.


3. HAIL 

Hailstorm Research and Hail Suppression 

Hailstorms belong to those atmos- 
pheric phenomena whose life history 
originates and terminates in the 
mesoscale range — i.e., their size 
ranges from about 1 to 100 kilo- 
meters. Phenomena of this scale pre- 
sent great difficulties for observation 
and description, and the means and 
instrumentation for that purpose are 
only now being developed. 

Radar, the oldest tool of mesoscale 
observation, has been somewhat dis- 
appointing when quantitative data are 
required. A system that combines 
airborne radar with data derived from 
the aircraft's doppler navigation sys- 
tem has proved to be a powerful tool 
for storm studies. The radar helps to 
delineate the precipitation echo of 
the storm while the doppler system 
provides the wind vector at flight 
level. Thus, on circling the storm, 
the line integrals for divergence and 
vorticity can be solved, and these 
yield the inflow into the storm 
throughout its life history. 
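The line-integral computation can be made concrete with a small numerical sketch (ours, not the project's actual software): sampling the wind around a closed flight path, the area-mean divergence follows from the outward flux and the area-mean vorticity from the circulation, by the divergence and Stokes theorems.

```python
import math

def mean_divergence_vorticity(points, winds):
    """Approximate area-mean divergence and vorticity inside a closed
    flight path from wind vectors sampled along it.
    points: (x, y) positions (meters) in order around the loop;
    winds: (u, v) wind components (m/s) at those positions."""
    n = len(points)
    area2 = 0.0  # twice the signed polygon area (shoelace formula)
    flux = 0.0   # outward flux: integral of v . n ds  (divergence)
    circ = 0.0   # circulation:  integral of v . t ds  (vorticity)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        u = 0.5 * (winds[i][0] + winds[(i + 1) % n][0])  # midpoint wind
        v = 0.5 * (winds[i][1] + winds[(i + 1) % n][1])
        dx, dy = x2 - x1, y2 - y1
        area2 += x1 * y2 - x2 * y1
        circ += u * dx + v * dy       # tangential component times ds
        flux += u * dy - v * dx       # outward normal component (CCW loop)
    area = 0.5 * area2
    return flux / area, circ / area

# Check on a purely divergent flow u = a*x, v = a*y sampled on a circle:
# the exact answers are divergence = 2a and vorticity = 0.
a = 1e-4
pts = [(1000.0 * math.cos(2 * math.pi * k / 72),
        1000.0 * math.sin(2 * math.pi * k / 72)) for k in range(72)]
wnd = [(a * x, a * y) for x, y in pts]
div, vort = mean_divergence_vorticity(pts, wnd)
print(div, vort)  # expect div close to 2e-4 and vort close to 0
```

In practice the "points" would be aircraft fixes around the storm and the "winds" the doppler-derived flight-level vectors, so that repeated circuits yield the inflow throughout the storm's life history.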

The improved means of storm ob- 
servation have de-emphasized the 
classical approach to storm research. 
This approach attempts to find, 
through observation and deduction, 
one valid storm model that satisfies 
all hailstorms. The last such model 
was derived by Browning from radar 
observations of one storm in England. 
It was characterized by a slanted 
updraft and an echo-free vault — i.e., 
an area where the main updraft 
speed was concentrated and where, 
due to the high updraft speed, no 
large particles accumulated that 
would cause radar reflections. 

Hailstorm Characteristics 

Nowadays we know that hail- 
storms appear in many manifesta- 

tions. The energy source is always 
the latent energy of condensation, 
but in the exploitation of that energy 
the vertical wind profile appears to 
assume an important role. Over the 
Great Plains of the U.S., hailstorms 
usually travel from west to east. 
They can grow and form new cells 
from the leading (eastern) edge or 
from their trailing (western) edge; 
thus, they can actually grow from 
the rear. It appears that their updraft 
is usually upright and not slanted 
even under conditions of strong wind 
shear; more and more, they are re- 
garded as aerodynamic hindrances in 
the large-scale atmospheric flow re- 
gime, with the wind going around 
and over the storm. Thus, the up- 
draft tower may be eroded on the 
outside by the horizontal wind but 
remain undisturbed in the interior. 

The air intake into a growing cell 
is of the order of 10 cubic kilometers 
per minute. High wind velocity in 
the anvil level appears to be the 
mechanism that prevents early decay 
of the cell, since precipitation and 
liquid water are carried away from 
the cell and, consequently, do not fall 
back into and "suppress" the updraft. 
It has been shown that hailstorms 
occur with special frequency in jet- 
stream regions of the United States, 
Europe, and India and that the com- 
bination of convective storms and 
jet stream can produce a very efficient 
and abundant precipitating cloud sys- 
tem. There are indications that the
effective hail-producing storm is
characterized by high latent instabil- 
ity, inflow from the right rear quad- 
rant, and strong wind shear aloft. 

Very poorly understood is the way 
hailstorms become organized. As yet, 
we do not know under what condi- 
tions many small storms or a few 

big ones form, or what causes the storms
sometimes to align themselves in 
rows and sometimes to form in clus- 
ters. It has been speculated that 
differences of surface temperature be- 
tween sunlight and shadowed areas 
may cause local seabreeze-type cir- 
culations which contribute to the 
organization of inflow areas. 

Some conditions lead to self- 
enhancement of storm intensity. For 
example, when the storm moves over 
its own precipitation area and en- 
trains moist air, the base level is 
lowered, which in turn increases the 
buoyancy. This will increase the in- 
flow into the storm, which then leads 
to an increased diameter of the up- 
draft column. This causes an increase 
of updraft speed for the same latent 
instability because the ratio between 
buoyancy forces and drag forces has
shifted in favor of the buoyancy
forces.
Theoretical Studies 

Theoretical studies of the dynamics 
of storms extend in two general
directions.
Analytical Studies — These studies 
deal with the influences of buoyancy 
and water-loading on updraft speed 
and radial divergence when the 
buoyancy term is compensated by the 
weight of the cloud and precipitation 
water. Essentially, this research aims 
at appraising the existence of an 
"accumulation level" of cloud water 
in the upper regions of the storm. 

According to Soviet scientists, the 
accumulation level is characterized by 
a high liquid-water content, since the 
local derivative of the updraft speed
versus height is negative above that
level and positive below it. As long
as the maximum updraft
speed is greater than 10 meters per 
second, water drops will neither de- 
scend below the accumulation level 
nor ascend much above it. Therefore, 
liquid water may become trapped at 
a certain layer and provide conditions 
for the rapid growth of hailstones. 
While the existence of such a level 
is possible, the rapidly increasing 
water-loading will, for continuity rea- 
sons, cause a strongly divergent flow 
that discharges the accumulating wa- 
ter content radially in a short time. 
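The trapping argument can be checked with a toy calculation. In the sketch below the updraft profile is an assumed parabola, not a measured one: a drop whose terminal fall speed equals the local updraft on the upper, decreasing branch of the profile is carried back toward that level from either side, which is the accumulation-level mechanism in miniature.

```python
def w(z, w_max=20.0, z_top=10_000.0):
    """Hypothetical parabolic updraft profile (m/s): zero at the surface
    and at z_top, peaking at w_max halfway up."""
    return 4.0 * w_max * (z / z_top) * (1.0 - z / z_top)

def accumulation_level(v_t, dt=1.0, steps=20_000, z0=6_000.0):
    """Integrate dz/dt = w(z) - v_t for a drop of fall speed v_t (m/s)
    until it settles; returns the equilibrium height in meters."""
    z = z0
    for _ in range(steps):
        z += (w(z) - v_t) * dt
    return z

# Drops falling at 10 m/s in a 20 m/s peak updraft settle on the upper
# branch where w(z) = 10 m/s: here about 8,536 meters.
print(round(accumulation_level(10.0)))
```

Below the updraft maximum the same balance point is unstable, which is why the water collects at the upper crossing, consistent with the description above of negative updraft shear above the accumulation level and positive shear below it.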

Numerical Studies — Several at- 
tempts are under way to expand one- 
or two-dimensional numerical cumu- 
lus-cloud models into convective 
storm models. Even two-dimensional 

models, however, are much too prim- 
itive for the simulation of a phe- 
nomenon as complex as a hailstorm. 
The best model to date appears to 
be a time-dependent, two-dimensional 
model developed by Orville; however, 
even this model puts severe strains 
on computer capacity and memory. 
There can be no question that these 
attempts are only first steps and that 
much research and data collection is 
required to make them realistic. 

Microphysical Studies 

Microphysical studies aim, partic- 
ularly, at an explanation of hailstone 
structure and the application of hail- 
stone features to explain the condi- 
tions under which it has grown. It 
is hoped that hailstones can be used 
as aerological sondes which even- 
tually may reveal their life history 

and, consequently, the environmental 
conditions inside the hail cloud. (See 
Figure V-11)

Here the investigator is confronted 
with complexities related to greatly 
varying growth conditions of ice due 
to accretion of supercooled water. 
The most thoroughly conceived the- 
ory has been developed by List from 
actual growth conditions in a hail 
wind tunnel. However, List gives 
consideration only to the accretion 
of supercooled cloud water; ice struc- 
tures resulting from the accretion of 
a mixed cloud (ice crystals and water 
droplets) or of aggregation of smaller 
hail or graupel have not been studied. 

The following general statements 
may be made with caution: 

Hailstone Structure — Most hail- 
stones show a hail embryo in their 


At the heart of almost every hailstone there is a distinct growth unit 5-10 millimeters 
in diameter known as the embryo. The illustration shows the three most common 
types: (1) Conical embryos consist of opaque crystals larger than 2 millimeters in
diameter, indicating formation between -20°C and 0°C. These embryos fall in a
stabilized position, blunt end downward, so they collect droplets on only one
surface. This category represents about 60% of the hailstones studied. (2) Spherical
embryos of clear ice (25% of the hailstones studied) consist of large crystals or a
single crystal, indicating growth in clouds with temperatures above -20°C. Many
of these embryos have cracks caused by the freezing of internal liquid water. (3) 
Spherical embryos of opaque ice (10% of the hailstones studied) have crystals of 
intermediate size and air bubbles showing no particular arrangement. They may 
have had a more complicated origin than other embryos, involving partial melting 
and refreezing or even collection of snow crystals. Because they tumble as they 
fall, they collect droplets equally on all surfaces. 



growth center. This embryo is conical 
or spheroidal. It can be opaque or 
clear ice. It is usually well recogniz- 
able against the shell structure of the 
remaining stone. 

One may conclude that the life 
history of a hailstone can be organi- 
cally subdivided into two major pe- 
riods: (a) growth in a hail embryo 
during the development cloud stage 
of the hail cell, and (b) growth in a 
hail shell during the mature-hail-cell 
cloud stage. It is conceivable that 
the former occurs during the de- 
velopment phase of the cumulonim- 
bus or hail cell, the latter when the 
penetrative convection has been es- 
tablished and a strong supporting 
updraft has formed. 

Environmental Growth Condi- 
tions — On the basis of List's theory 
it is possible to derive four environ- 
mental growth conditions from typi- 
cal hailstone properties: 

1. It is unlikely that hailstones 
are usually grown in the high 
water content of an accumula- 
tion level; if that were true, one 
should observe soft, spongy
hailstones much more frequently.
2. It can be shown that hailstones 
with many alternating layers 
of clear and opaque ice may 
have grown at high levels in 
the cloud; at these levels, small 
altitude variations cause large
variations of the growth conditions.
3. Hailstone structures that are 
homogeneous over a large part 
of the shell indicate that they 
have grown in an updraft with
continuously increasing updraft
speed.
4. The natural hailstone concen- 
tration is of the order of 1 to 
10 per cubic meter. This con- 
centration effectively depletes 
the cloud water content, as was 
shown in 1960 by Iribarne 
and dePena, which gives hope 

that hailstones could be made 
smaller and less damaging 
through a slight artificial in- 
crease in the concentration of 
about two orders of magnitude. 
Amounts of seeding material
needed to accomplish this are
modest.
Hail-Suppression Experiments 

The problem of hail suppression 
is economic as well as scientific. One 
of the questions to be answered is: 
Does agriculture suffer sufficiently 
from hailstorms that prevention is 
necessary? Some people believe that, 
as long as we have a farm surplus 
and pay farmers for not planting 
certain crops, we do not need hail 
suppression. While this may be true 
now, in coming years we may need 
every bushel of farm crop for our 
food supply. This appears to be a 
good time, therefore, to begin a hail- 
suppression research program. Re- 
search must be emphasized, since 
too little is known about the hail 
mechanism to permit a realistic hail- 
suppression program to be conceived. 
Also, little is known about the rela- 
tive damage that is done by hail, 
water, and wind during a storm. 

The research phase need not be 
completed, however, before modifica- 
tion experiments can be thought of. 
On the contrary, the problem should 
be considered as a field program in 
experimental meteorology, where a 
well-conceived experiment with hail 
clouds is carried out with the poten- 
tial of observing a cause-and-effect 
relationship. Some hail clouds are 
more suited to such an experiment 
than others; for example, hail clouds 
growing from the rear edge should 
have a basically simpler structure 
than hail clouds that grow from the 
leading edge. Such clouds are also 
easier to observe, as they are not
usually obscured by an overhanging
anvil.
The National Hail Research Ex-
periment (NHRE) attempts to ac-
complish exactly this balance between
research objectives and suppression
operations — namely, to use aircraft,
radar, and surface networks for a 
thorough study of the hailstorm 
simultaneously with a well-designed 
aircraft seeding program to which the 
storm's reaction is observable. The 
latter program cannot be conducted 
entirely without statistical control. 

Hail Suppression: Soviet Union 

Much information has been ob- 
tained from the operational hail- 
suppression experiments in the Soviet 
Union, specifically in the Caucasus. 
Several books have been published, 
and exchange visits between Soviet, 
American, and Canadian scientists 
have taken place, with many fruitful 
discussions, although it has not been 
possible to obtain a clear appraisal 
of the validity of the claims made 
by Soviet scientists. 

It appears that two major efforts 
are under way in the Soviet Union 
which differ basically in the means of 
delivering the seeding agent into the 
cloud. In one, guns and shells are 
used; in the other, rockets. While the 
guns have greater range and altitude 
and deliver 100 to 200 grams of the
seeding agent (AgI or PbI2) by ex-
plosion of the "warhead," the rockets
can carry a larger amount of the 
agent and deliver by burning a pyro- 
technic mixture (3.2 kg). The rockets 
are somewhat more versatile in de- 
livery either on a ballistic curve 
through the storm or vertically inside 
the cloud when descending by para-
chute.
One of four current projects in 
the Soviet Union is carried out 
through the Academy of Sciences of 
the Georgian S.S.R. in the Alazani 
Valley of the Caucasus, with Kart- 
sivadze as the chief scientist. Another 
is conducted by the Hydromete- 
orological Service in Moldavia by 
Gaivoronskii and others. The third, 
and largest, project seems to be 
conducted by the High Altitude In- 



stitute of the Hydrometeorological 
Service in Nalchick, under the di- 
rection of Sulakvelidze. This proj- 
ect consists of hail-suppression ex- 
peditions in the northern Caucasus, 
Azerbaidjan, and Armenia. The 
fourth is also in the Georgian S.S.R. 
and is under the direction of Lomi- 
nadze. Rockets are used in the first 
two projects; guns are used exclu- 
sively in the last two. The Ministry 
of Agriculture furnishes the hardware 
and crews for the field projects. 

Scientific Bases — All of these ef- 
forts are based on the validity of the
relationship

Rs = Rn (Nn/Ns)^(1/3)

where Rs is the mean-volume hail-
stone radius after seeding,

Rn is the mean-volume hail-
stone radius without seed-
ing,

Nn is the hailstone concentra-
tion without seeding,

and Ns is the seeded hailstone con-
centration.

A physical justification for the 
validity of this relationship was given 
by Iribarne and dePena and con- 
firmed more recently by List and 
Lozowski. The most important find- 
ing of this theoretical work is that 
the water content of a hail cloud 
becomes effectively depleted by a 
small number of hailstones, of the 
order of 10 per cubic meter, so that 
even modest artificial increases of 
their concentration by two orders of 
magnitude can be expected to de- 
crease their size sufficiently to prevent 
damage. It is this recognition that 
brings hail-suppression experiments 
into the realm of physical realization 
and economic benefit. 

All experiments in the Soviet 
Union seem to be designed in similar 
fashion: hail forecast, radar analysis, 
identification of the hail-spawning 

area in the cloud, and delivery of the 
seeding agent into the hail cloud. 
Forecasting skill has been developed 
to the degree that special experiments 
can be carried out to prevent the 
development of impending hail, while 
others are conducted to stop hail al- 
ready falling. 

Reported Results — Soviet scien- 
tists state that more than one million 
hectares (3,900 square miles) were 
protected in 1966. Hail damage in 
the protected area was 3 to 5 times 
smaller than in the unprotected area, 
which means that the cost of pro- 
tection amounts to barely 2 or 3 
percent of the value of the crops 
involved. For 1966, the total ex- 
penditure for protection was 980,000 
rubles, and the computed economic
effect was a saving of 24 million
rubles.
Gaivoronskii and others have also 
reported on hail-suppression experi- 
ments in Moldavia, near the Bulgar- 
ian eastern border. These experi- 
ments utilize "Oblaka" rockets, a 
type that has a caliber of 125 milli- 
meters, weighs 33 kilograms, holds 
3,200 grams of PbI2 as a pyrotechnic
mixture, and delivers a total of 3 × 10^16
nuclei at -10° centigrade. Maximum
range and height are 12 and 9.5 
kilometers, respectively. The authors 
state that, in 1967, only 551 hectares 
out of 100,000 hectares of crop were 
damaged compared with 4,784 hec- 
tares in the control area. A similar 
effort with rockets is being carried 
out by Kartsivadze. 

Evaluation — It appears from the 
literature that the work in the Soviet 
Union is already past the research 
phase and well into the operational 
stage. As tests in the research phase 
were not randomized, however, a firm 
statistical significance has not been 
established. It is possible that the 
discovery by Changnon of the oc- 
currence of individual, short hail- 
streaks rather than long hailswaths 
may invalidate some of the conclu- 
sions made by the experimenters. 
Thus, a hailstreak may terminate by 
itself, rather than as a result of the 

seeding action, before reaching the 
boundary of the protected area, and 
since there are no means of knowing 
this beforehand such a case is counted 
as a positive seeding result. These 
conditions clearly point to the great 
complexity of designing a randomized 
experiment that would yield a unique 
result in a relatively short time. 

There can be little doubt that the 
basic approach of the Russian sci- 
entists, to treat each hailstorm as an 
individual case, is appealing; at the 
least, it eliminates the great uncer- 
tainty of the diffusional process from 
surface generators to the storm. 

Hail Suppression: Switzerland 

A hail-suppression experiment was
conducted in Switzerland from 1957
to 1963 in the
Canton Ticino. The experimental 
area appears to have been larger than 
the canton, since generators and rain- 
gauges were distributed over roughly 
10,000 square kilometers, but the 
size of the area instrumented with 
24 surface AgI generators (type un-
specified) was only a minor part of 
about 4,000 square kilometers, one- 
half of which were in Italy. 

After many years of careful 
freezing-nuclei measurements in 
and downwind from AgI generator
sources it was concluded that, in 
order to be effective, seeding from 
the ground must be concentrated in 
the regions and at the moment in 
which storms form. It would appear, 
however, that the analysis should 
only be performed for the area coin- 
ciding with the generator network. 
Since this was not done, conclusions 
reached in the experiment — to the 
effect that "there is little doubt that 
seeding has been very effective in in- 
creasing the number of hail days" — 
seem to be not entirely valid. 

Hail Suppression: France 

French efforts in operational hail 
suppression are also continuing. Des- 



sens gives a 22.6 percent decrease of 
hail falls as an average over the eight- 
year period since the experiment be- 
gan. The French scientists are using 
surface AgI-acetone generators, of
which 240 are distributed over 70,000 
square kilometers. The generators are 
lighted 6½ hours before the expected
outbreak of hailstorm activity in or- 
der to load the air sufficiently with 
good freezing nuclei, which may not 
normally be possible. 

The operations in Switzerland 
(GROSSVERSUCH III) can be re- 
lated to those in France in regard to 
the density of the generator network. 
The results for GROSSVERSUCH III 
show an increase of the number of 
days of hail (and an increase of rain 
amount per seeded day), while Des- 
sens reports a decrease in hail dam- 
age. Of course, "days of hail" and 
"hail damage" are two parameters 
that need not be directly proportional. 

Hail Suppression: Kenya 

Final results are available for the 
hail-suppression experiment carried 
out from 1963 to 1967 in Kericho, 
Kenya. It was based on the firing 
of Italian antihail rockets from 13 
firing positions within the Kitumbe 
Estate. In 1968 the rocket network 
was expanded to neighboring estates 
to a total of more than 30 stations. 
The rockets contain 800 grams of 
TNT and no AgI; their burst occurs
at 2,000 to 2,400 meters above 
ground or at about the +2° cen- 
tigrade level. Rocket-firing begins 
when hail starts falling and continues 
until hail stops. In Kitumbe nearly 
5,000 rockets were fired during 60 
hail storms. 

Because of the consistency of the 
reduction of damage on Kitumbe dur- 
ing both periods, it seems unlikely 
that this was due to chance. (See 
Figure V-12) Five mechanisms have 
been suggested to explain why the 
experiment should work: (a) cavita- 
tion, (b) shock-induced freezing, (c) 
freezing due to adiabatic expansion, 

(d) introduction of ice nuclei, and 

(e) introduction of hygroscopic nuclei. 

Continuing Experimentation — Pre- 
liminary results have been obtained 
from continued experiments over tea 
estates in Kericho. Seeding was done 
at cloud base with pyrotechnic de- 
vices dispersing between 6 and 30 
grams of AgI per minute; 247 seed-
ing flights were carried out on 225 
operational days. In the first season, 
58 hail reports from within the tea 
groves were obtained from 670 
seeded cells, against a historical 
background of 360 hail reports from 
686 nonseeded cells. Damage per hail 
instance was 2,929 pounds with seed- 
ing and 7,130 pounds without seed- 
ing. The great frequency of storms 
seems to make this area an excellent 
natural laboratory. 

Hail Suppression: Italy 

The effort in Italy proceeds along 
two avenues. The first approach is 
scientific in character and entails a 
study of the hail phenomenon rather 
than of hail prevention. The project 
is carried out by the Institute for 
Atmospheric Physics of the National 
Research Council. The second ap- 
proach has been developed by farmer 
associations and the Ministry of 
Agriculture and Forests. The largest 
effort is that of exploding rockets 
inside the clouds when the hailstorm 
is overhead. The rockets carry 800 
grams of TNT to altitudes of 1,000, 

1,500, or 2,000 meters. In 1968, 
96,000 of these rockets were fired 
in Italy. Plans are being made 
through the National Bureau of Elec- 
trical Energy for a project employing 
ground-based silver iodide burners of 
the type used by Dessens in France. 

Hail Suppression: United States 

In the United States, plans for a 
National Hail Suppression Field Test 
proceed slowly, while theoretical and 
applied research on the structure of 
hailstorms and the hailstone mech- 
anism progresses more rapidly. Proj- 
ect HAILSWATH, a loosely coordi- 
nated field experiment, was organized 
in the summer of 1966 in Rapid City, 
South Dakota. Twenty-three institu- 
tions participated in this endeavor, 
whose outstanding purpose was to 
explore the feasibility of a large joint 
operation involving, at times, as 
many as 12 aircraft. Hailstorms were 
seeded with dry ice and silver iodide 
according to a target-control area 
approach on 10 experimental days, 
but the results lack statistical sig-
nificance.
A review of various hail-suppres- 
sion projects in the United States 
makes it apparent that American 
hail-suppression activities can hardly 
be called successful. 


Periods compared: July 1963 to September 1965 and August 1965 to
September 1967; loss figures are given for all estates and for rocket
firing at Kitumbe.





The table shows the decrease in the average loss per hailstorm in kilograms per 
hectare at Kitumbe estate compared with other estates in the nearby area. 



Current Status of Hail Prevention 

Hail losses in the United States, 
including damage to property and 
agricultural crops, have been esti- 
mated at $200 million to $300 million 
annually. While damage from hail- 
storms can occur in nearly every 
state, major hail losses are concen- 
trated in a belt extending from west- 
ern Texas through the High Plains 
into Alberta, Canada. 

Most property owners respond to 
the hail risk by buying insurance, 
since damages by hail are typically 
covered in a homeowner's compre- 
hensive policy. However, insurance 
coverage is less satisfactory for agri- 
cultural crops, because of the high 
premiums required in regions of high 
hail hazard. Crop hail insurance pre- 
miums in the Great Plains can range 
up to 22 percent for a standard policy. 

During a period of crop surpluses, 
it may be debatable whether crop 
losses from hail justify any substan- 
tial research effort. However, from 
the point of view of the effects of 
hailstorms on society, and consider- 
ing the trauma of a hailstorm loss 
and the fact that destruction of prop- 
erty by hail is a net economic loss, 
investigation of artificial hail preven- 
tion deserves attention. 

In regions of high hail hazard, it 
appears likely that an ability to re- 
duce hail damage by as little as 5 or 
10 percent would provide a net eco- 
nomic benefit. It is anticipated that 
hail reduction of 50 to 75 percent 
should be possible, with a resulting 
higher net economic benefit. 
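The scale of the potential benefit follows from simple arithmetic; the sketch below uses the midpoint of the loss estimate above and the damage-reduction percentages quoted in the text.

```python
# Midpoint of the $200-300 million annual U.S. hail-loss estimate.
annual_hail_loss = 250e6

# Damage reductions quoted in the text: the 5-10 percent threshold for a
# net benefit and the 50-75 percent reduction anticipated as achievable.
for reduction in (0.05, 0.10, 0.50, 0.75):
    avoided = annual_hail_loss * reduction
    print(f"{reduction:.0%} reduction avoids about ${avoided / 1e6:.1f} million per year")
```

Even the low end of the range, roughly $12 to $25 million avoided annually, frames the budget against which a suppression program's costs would be judged.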

Data Base: Large-Scale Field Experiments

Attempts to prevent hail by cloud 
seeding were initiated shortly after 
the early experiments of Schaefer and 
Langmuir in the late 1940's. The 
projects were based mostly on the 
concept of reducing hailstone size 

through increases in the number of 
hailstone embryos. Silver iodide was 
the most common seeding agent and 
was frequently released from net- 
works of generators on the ground. 
The early projects in this country 
suffered from numerous handicaps, 
including a lack of knowledge of 
cloud processes and of resources for 
any significant evaluation studies. 

The early hail-suppression projects 
in the United States were conducted 
for commercial sponsors and em- 
ployed little or no statistical design. 
Some randomized experiments using 
ground-based generators were car- 
ried out in Argentina, Switzerland, 
and Germany. They yielded evidence 
that silver iodide could affect hail- 
storms, but that the effect could be 
unfavorable as well as favorable. 

Throughout the 1960's, under- 
standing of hail-formation processes 
was advanced through a number of 
extensive observational programs of 
hailstorms in the United States and 
abroad. The work carried out in 
the Soviet Union during this period 
is especially noteworthy, but observa- 
tional programs carried out in north- 
east Colorado also deserve mention. 

Improved understanding of hail 
growth processes led to more sophis- 
ticated systems for treatment. Seed- 
ing was increasingly carried out from 
aircraft and represented attempts to 
influence specific parts of a hail- 
bearing cloud rather than attempts 
to increase ice-nucleus concentrations 
throughout large volumes. This lo- 
calization of the seeding treatment 
reached its apex in the development 
in the Soviet Union of a system to 
introduce seeding agents into special 
regions within a cloud by means of 
artillery shells. 

There is increasing evidence that 
the seeding treatment used through- 
out the 1960's has been effective in 

eliminating hail from certain storms 
and reducing hail damage in other 
instances. Review of the evidence 
from a number of hail-prevention 
projects leads to the conclusion that 
the projects were successful in some 
instances. More recent results indi- 
cate substantial success in hail pre- 
vention in the United States, East 
Africa, France, and the Soviet Union. 
Indeed, a leading Soviet scientist is 
quoted as saying that "the problem 
of hail control is successfully solved." 

Mathematical Modeling 

During the past five years, sub- 
stantial advances have occurred in 
mathematical models of cumulus 
clouds. An ability to create realistic 
mathematical models of hailstorms 
would provide the basis for a better 
understanding of hail-formation proc- 
esses and mechanisms for hail pre-
vention.
Initial cloud-modeling attempts 
utilized relatively simple one-dimen- 
sional steady-state models. These 
simple models were helpful as fore- 
runners of more complex models 
which now simulate realistically the 
life history of a large rain shower. 
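The one-dimensional steady-state approach can be illustrated with a toy sketch: integrating a parcel's vertical-velocity equation upward from cloud base, with buoyancy driving the updraft and entrainment retarding it. All numbers here (temperature excess, entrainment coefficient, cloud depth, starting velocity) are illustrative assumptions, not values from any published model.

```python
import math

def updraft_profile(dT=1.0, T_env=263.0, mu=2e-4, dz=50.0, depth=8000.0):
    """Toy steady-state parcel model: integrate
    w dw/dz = g*dT/T_env - mu*w**2 upward from cloud base.
    Heights in meters; returns a list of (height, w) pairs."""
    g = 9.8
    w = 1.0                      # assumed small updraft at cloud base, m/s
    z = 0.0
    profile = []
    while z <= depth:
        profile.append((z, w))
        # Euler step on w^2: d(w^2)/dz = 2*(buoyancy - entrainment drag)
        w2 = w * w + 2.0 * (g * dT / T_env - mu * w * w) * dz
        w = math.sqrt(max(w2, 0.0))
        z += dz
    return profile
```

With these assumed parameters the updraft approaches the equilibrium value sqrt(g*dT/(T_env*mu)), about 13–14 m/s, over a few kilometers of ascent, which is the qualitative behavior such simple models were built to capture.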

In addition to modeling the dy- 
namics and life history of the large 
cumulonimbus clouds, greater atten- 
tion has been given to the mathemati- 
cal simulation of individual hailstone 
growth. Early efforts at development 
of a mathematical formulation of 
hailstone growth are being continued. 
More recent work has given greater 
insight into the hailstone growth 
process, and shows that the primary 
region of hailstone growth appears to 
be in the higher and colder parts of
the hail-bearing clouds. (See Figure
V-13) This information, derived
from the mathematical analysis, is 
consistent with field observations. It 
is of particular importance since it
implies a basis for success in hail
prevention by cloud seeding through 
the mechanism of drying out the re- 
gion of the cloud in which hailstones
grow.

Although unresolved problems re- 
main concerning the position of hail 
growth with respect to the updraft 
maximum and the liquid-water con- 
centrations in hail-growth regions, 
a picture is beginning to emerge of 
a physically reasonable system for 
hail growth and hail prevention that 
is consistent with observations ob- 
tained from field projects. 

Prevailing Scientific Controversy 

There is no general agreement on 
the effectiveness of hail-prevention 
techniques. Skepticism concerning the 
claims of success in the Soviet Union 
and concerning the reality of ap- 
parent reductions in hail damage on 
hail-suppression projects in this coun- 
try loomed large in the development 
of current plans for hailstorm re- 
search in the United States. This is 
illustrated by the following extract 
from a planning document for the 
National Hail Research Experiment 

. . . This document is en-
tirely concerned with a discussion
of the need to complete success- 
fully a Hail Suppression Test Pro- 
gram, since it appears to us that 
a National Hail Modification Pro- 
gram is now premature. We must 
first determine if hailstorms can 
indeed be modified, and then learn 
if it is worth the effort. 

This point of view (that so little 
is known about hailstorms that the 
primary hail research effort should be 
so directed) is in conflict with the 
point of view that current knowledge 


Figure V-13

The figure shows a single, mature convective storm of the midwestern U.S. which 
is apt to produce hailstones. A temperature and height scale are along the lefthand 
margin. Note the base of the cloud at 3.7 kilometers. The vertical wind speed 
profile is plotted over the cloud and indicates a maximum wind speed of 19 meters 
per second near the middle level of the cloud. If the maximum speed of the updraft 
exceeds the terminal velocity of the largest stable droplet, an accumulation zone 
of supercooled water forms because of the chain-reaction mechanism triggered by 
droplet breakup. The heavy line in the center section of the cloud is the 35-decibel 
contour as seen by radar. The accumulation zone is within this area. It is this area 
into which seeding material should be placed to be effective. 



provides a valid basis for initiating 
programs for application of current 
technology to hail prevention. 
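The updraft-versus-fall-speed comparison in the Figure V-13 caption can be made concrete with a common empirical fit for raindrop terminal velocity (the exponential fit of Atlas et al., 1973); the 5-mm "largest stable drop" size used below is an illustrative assumption.

```python
import math

def raindrop_terminal_velocity(d_mm):
    """Approximate sea-level terminal fall speed (m/s) of a raindrop of
    diameter d_mm, from the exponential fit of Atlas et al. (1973)."""
    return 9.65 - 10.3 * math.exp(-0.6 * d_mm)

peak_updraft = 19.0        # m/s, the maximum updraft in the figure
largest_stable_mm = 5.0    # illustrative size for the largest stable drop
# Drops at or below this size fall more slowly than the updraft rises,
# so they are carried aloft and can collect in an accumulation zone.
is_suspended = raindrop_terminal_velocity(largest_stable_mm) < peak_updraft
```

Since even the largest stable drops fall at roughly 9 m/s, a 19 m/s updraft easily suspends them, which is the condition the caption gives for forming an accumulation zone of supercooled water.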

Requirements for Scientific Activity

Instrumentation — Current hail-re- 
search plans call for a substantial 
effort to develop sophisticated instru- 
mentation to attempt to obtain the 
detailed life history of hail-bearing 
clouds. This is considered necessary 
to create a complete physical model 
of such storms. Development of the 
instrumentation for this task will 
require a major effort. The NHRE 
five-year program involves large ex- 
penditures for radars, specialized air- 
craft, and large numbers of field
personnel.

The instrumentation and equip- 
ment required for a more modest 
effort at suppressing hail in a pre- 
designated target area would be less. 
Such an approach could provide a 
means of testing various hail-suppres- 
sion techniques, would provide a 
basis for attaining knowledge to an- 
swer extant scientific questions, and 
would also partially satisfy the view 
that attempts should be made to 
apply current technology without fur- 
ther delay for scientific investigation, 
which should continue concurrently. 

Applied Technology — Develop- 
ment of hail-suppression technology 
involves not only basic research, as 
is being planned under the current 
NHRE effort, but also efforts to 
apply the technology. Needs for 
basic research on hail appear to be 
covered adequately in present plans 
for NHRE. However, efforts in the 
development and application of hail- 
suppression technology are badly
needed.

An advantage of having several 
applications projects under way si- 
multaneously is that they can provide 
additional testing opportunities and 
opportunities for learning. An essen- 
tial requirement for optimum learning 
is to have a number of untreated 
cases, randomly selected, reserved as 
"control" cases. In several locations, 
local groups primarily concerned 
with applications and benefits from 
weather modification projects have 
agreed voluntarily to forgo treat- 
ment of a limited number of storm 
situations to provide such control 
cases. This willingness sets the stage 
for an opportunity for increased
learning.

However, local groups that have 
organized to apply hail-suppression 
technology have sometimes expressed 
the opinion that the scientific com- 
munity is more interested in perpetual
programs of research than it is in ap-
plication. Such groups may be in- 
clined to proceed on their own with
premature operational programs that
not only involve improper techniques
but also foreclose future opportuni-
ties for associated research efforts.
It is, therefore, rather urgent that 
steps be taken to develop mechanisms 
for cooperation with such local 
groups while the opportunity to re- 
serve some untreated control cases 
still exists. If local groups begin hail- 
suppression programs from which 
they believe benefits are being ob- 
tained, the opportunity for coopera- 
tion and continued learning will 
disappear, since pressures will exist 
for treatment of all cases. 

Approximate Time-Scale — If the 
present NHRE program begins its 
activities on schedule in 1972, it 
should produce useful inputs to hail- 
suppression technology within ap- 
proximately five years. In addition, 
if steps are taken to work with local 
groups, useful inputs to hail-suppres- 
sion technology can also be antici- 
pated within three to five years of 
the start of such programs. 

Considering the time-scale for both 
basic research and applications pro- 
grams, it should be possible to obtain 
adequate knowledge to carry out hail- 
reduction efforts economically and 
routinely by the end of this decade. 



Basic Processes of Lightning 

About 2,000 thunderstorms are in 
progress over the whole earth at any 
given time. These storms produce a 
total of about 100 cloud-to-ground
and 500 intracloud lightning dis- 
charges each second. It follows that 
there are over 8 million lightning
discharges each day to earth, and 
about 5 times as many discharges 
within the clouds. 
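A back-of-envelope check of these global rates: a cloud-to-ground rate near 100 flashes per second is what makes the "over 8 million per day" total and the roughly five-to-one intracloud ratio mutually consistent.

```python
seconds_per_day = 24 * 60 * 60           # 86,400

ground_per_sec = 100                     # global cloud-to-ground flash rate
intracloud_per_sec = 500                 # global intracloud flash rate

ground_per_day = ground_per_sec * seconds_per_day       # 8,640,000: "over 8 million"
intracloud_ratio = intracloud_per_sec / ground_per_sec  # 5.0: "about 5 times as many"
```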

Lightning is essentially a long 
electric spark. (See Figure V-14) The 
total electrical power dissipated by 
worldwide cloud-to-ground lightning 
is roughly equal to the total
power consumption of the United
States, about 500 billion watts. On 
the other hand, the energy from a 
single lightning flash to ground is
only sufficient to light a 60-watt bulb
for a few months. It is the high 
worldwide rate of lightning flashing 
that provides the high power levels. 

The electrical energy that generates 
lightning is transformed to sound 
energy (thunder), electromagnetic 
energy (including light and radio 
waves), and heat during the discharge 
process. The radio waves emitted 
by the hundreds of lightning dis- 
charges per second provide a world- 
wide noise background. The level at 
which many communications systems 
can operate is limited by this back- 
ground noise level. The radio waves 
emitted by a single close (say, closer 
than one mile) lightning discharge 
can also cause malfunction of sensi-
tive electronic systems (particularly
solid-state systems) such as are used 
in modern guided missiles. 

The heat generated by the lightning 
channel sets forest fires, ignites flam- 
mable materials, and can be a cause 
of individual death. Of the over 8 
million discharges that hit the earth 
daily, very few cause damage. For 
example, most lightning to wooded 
areas does not cause forest fires. 
Still, there are about 10,000 forest 
fires a year in the United States at- 
tributable to lightning; and about 
2,000 rural structures, roughly half 
of which are barns, are destroyed 
by lightning-induced fires each year. 

Lightning strikes about 500 U.S. 
commercial airliners per year. Most 

Figure V-14 — LIGHTNING 

(1) This photograph shows a normal cloud-to-ground lightning flash near Mount 
San Salvatore. Lugano, Switzerland. Note how the streamers from the main lightning 
strokes branch downward. (2) In this photograph, a tall tower on Mount San Salva- 
tore has triggered a lightning flash. Note how the streamers branch upward, indicat- 
ing a reverse situation from the normal lightning flash. 



strikes produce little if any damage, 
the lightning being confined to the 
plane's metal skin. Sometimes, how- 
ever, potentially serious structural 
damage, such as the melting of large 
holes, does occur. There have been 
two cases of the total destruction of 
aircraft which the Federal Aviation 
Administration has attributed to igni- 
tion of the aircraft's fuel by lightning.
The most recent case was that of a 
Pan American Boeing 707, which 
exploded over Elkton, Maryland, in 
December 1963 after being hit several 
times by lightning. 

In addition to the radio waves and 
heating effect produced by lightning, 
the direct electrical effects of light- 
ning are often deleterious. They can, 
for example, result in the disruption 
of electrical power, as is often the 
case when lightning strikes a power- 
transmission line or a power station. 
Direct electrical effects can also result 
in malfunction or destruction of criti- 
cal electronic equipment in aircraft 
and missiles. A spectacular example 
of the foregoing was the lightning- 
induced malfunction of the primary 
guidance system of the Apollo 12 
moon vehicle. Further, individual 
deaths from lightning, about 200 per 
year in the United States, are pri- 
marily due to electrocution. 

Control of Lightning 

What can we do to control light- 
ning? Are there possible harmful 
consequences of such control? Let us 
look at the second question first and 
attempt to answer it by two examples. 
Suppose technology were advanced 
enough that we could stop lightning 
from occurring. What would the 
result be to forests and the atmosphere?

1. If there were no lightning, 
would the incidence and de- 
structiveness of forest fires de- 
crease? In many cases, forest 
fires would be less common, but 
those that did occur would be 
more destructive. Lightning-
induced forest fires and the
forests have lived together in 
some sort of equilibrium for
a long time. (The oldest ar-
cheological evidence of light- 
ning is dated at 500 million 
years ago.) There is now some 
evidence to indicate that fre- 
quent forest fires will keep a 
forest floor clean so that the 
fires that do occur are small 
and will not burn the trees. 
Further, in some cases, rela- 
tively clean forest floors may 
be necessary for the germina- 
tion of new trees. For example, 
Sequoia seedlings can germi- 
nate in ashes but are suppressed 
under a thick layer of needles 
such as would cover an un-
burned forest floor. Thus, it
is not obvious that blind control 
of forest fires is desirable. 

2. If the frequency of lightning 
were diminished, would there 
be an effect on the atmosphere? 
Nobody knows. Lightning cur- 
rents and other electrical cur- 
rents flowing in the atmosphere 
during thunderstorms deliver 
an electrical charge to the earth. 
An approximately equal charge 
(a balancing charge) is thought 
to be carried from the earth to 
the ionosphere in areas of fair 
weather by the ambient fair- 
weather electric field between 
the earth and the ionosphere. 
Changing the lightning fre- 
quency might upset this charge- 
transfer balance with a result- 
ant effect on the fair-weather 
field. The change in the fair- 
weather field might trigger 
further reactions. 

The study of the effects of light- 
ning on the environment is in its 
infancy. The control of lightning is 
not necessarily desirable unless the 
full consequences of that control are
known.

Now, let us look at lightning con- 
trol. When "control" is mentioned 
it is reasonable to think either of (a)
stopping lightning or (b) harnessing
its power. To harness appreciable 
power from lightning would require 
a worldwide network which could 
tap energy from a reasonable fraction 
of the world's total discharges. Even 
if science were to devise an efficient 
way to tap energy from a lightning 
stroke (which it has not yet done), 
the construction and maintenance of 
some sort of worldwide network ap- 
pears at present to be impractical. 
On the other hand, stopping lightning 
from a given storm, or at least de- 
creasing its frequency, is certainly a 
practical goal, and some initial steps 
in this direction have been taken. 
For example, it has been experi- 
mentally demonstrated, although not 
to the satisfaction of everyone con- 
cerned, that cloud seeding can some- 
times decrease the number of light- 
nings produced by a thundercloud. 

Understanding of Lightning 

A number of photographic, elec- 
trical, spectroscopic, and acoustic 
measurements have been made on 
lightning. From these we have a 
reasonably good idea of the energies, 
currents, and charges involved in 
lightning, of the electromagnetic 
fields (radio waves, light, and so on) 
generated, of the velocities of propa- 
gation of the various luminous 
"streamer" processes by which the 
lightning discharge forms, and of 
the temperature, pressure, and types 
of particles comprising the discharge 
channel. In short, we have available 
both an observational description of 
how lightning works (e.g., the dis- 
charge is begun by a luminous leader 
which is first seen at the cloud base 
and moves toward ground in steps, 
as shown in Figure V-15) and most 
of the data needed for routine engi- 
neering applications (e.g., power-line 
design and lightning protection). 

A good deal of what we know 
about lightning has been determined 
in the United States in the past fifteen 
years. However, the total number of 
U.S. researchers primarily studying 




Figure V-15
(Illustration Redrawn with Permission. BEK Technical Publications, Inc. Carnegie. Pa) 

The drawing shows the initiation of a stepped-leader from a cloud base. The time 
involved is about 50 millionths of a second. As the downward-moving leader gets 
close to the ground, upward-moving discharges meet it. A return stroke then propa- 
gates from the ground to the cloud. The time for the return stroke propagation is 
about 100 millionths of a second. Propagation is continuous until the charges are
neutralized.

lightning at any given time during 
this period has been only about ten, 
of which perhaps half have contrib- 
uted to our understanding of light- 
ning. As an example of the general 
lack of scientific interest in lightning 
phenomena, the first technical book
on lightning was not published until
1969.

While we have available a number 
of observational "facts" about light- 
ning, we do not understand lightning 
in detail. Areas of particular igno- 
rance are: (a) the initiation of light- 
ning in the cloud and (b) propaga- 
tion of lightning from cloud to 
ground. Unfortunately, these are just 
the areas in which a detailed under- 
standing is essential if lightning con- 
trol is to be practiced. 

It is important to know what we 
mean by a "detailed understanding." 
A "detailed understanding" implies a 
mathematical description or model of 
the lightning behavior. The mathe- 
matical model is adequate when it can 
predict the observed properties of 
lightning. The mathematical model 
can then be used to determine the 
effects on the lightning of altering 
various parameters of the model. 
For the case of lightning initiation, 
these parameters might be the am- 
bient temperature, ambient electric 
field, number of water drops per 
unit volume, etc. The predictions of 

the mathematical model must be 
tested by experiments. The results 
of these experiments can suggest 
changes in the model or can verify 
its validity. It follows that experi- 
ment and theory must advance to- 
gether to achieve a complete descrip- 
tion of the lightning phenomenon. 

The physics of lightning initiation 
and propagation is exceedingly com- 
plex. Some idea of its complexity can 
be gauged by noting that the proc- 
esses involved in electrical breakdown 
between a rod and a flat plate in the 
laboratory (an electric spark) are at 
present only vaguely understood. It 
appears that, despite about thirty 
years of experimental work, a real 
understanding of the laboratory spark 
will not be available until a mathe- 
matical description of the spark is 
forthcoming. Only recently have
digital computers of sufficient size
become available that a mathematical
solution to the spark problem is in
principle possible.

The Future 

Significant progress in our detailed 
understanding of lightning could 
probably be made in the next ten to 
fifteen years, although given the pres- 
ent level of scientific activity and 
ability in the lightning area, it is 
unlikely that this will be the case. 

Lightning research has been neither 
glamorous enough nor quantitative 
enough to attract the attention of 
many good graduate students or 
senior scientists. Several excellent ex- 
perimentalists are presently working 
in the lightning area, and their work 
needs to be continued and enlarged. 
More important to the goal of de- 
tailed understanding of lightning, 
however, is the need for mathemati- 
cally oriented scientists to become 
involved in the problems of lightning 
initiation and propagation. The 
mathematically oriented scientists and 
the experimentalists should work 
closely together in both the construc- 
tion of suitable mathematical models 
and in the planning and analysis of
experiments.

In studying lightning, the time- 
scale on which meaningful results 
can be expected is relatively long. 
From an experimental point of view, 
the necessity of staying in a given 
location for a long enough time to 
observe enough lightning to be able 
to compile statistically significant re- 
sults determines the time-scale of any 
particular lightning research pro- 
gram — generally, several years. The 
mathematical approach to lightning is 
exceedingly complex and thus must 
also take place on a time-scale of 
several years. With a coordinated 
work force of perhaps five senior
theoreticians and fifteen senior ex-
perimentalists (assuming, of course, 
that these researchers are equipped 
with the necessary skills), one might 

expect significant progress in our 
detailed understanding of lightning 
in the next ten to fifteen years. There 
is certainly no assurance of success 

in any lightning research. It is clear, 
however, that a successful effort to 
understand lightning must be a long- 
term effort. 

Reduction of Lightning Damage by Cloud Seeding 

Lightning is an important cause of 
forest fires throughout the world and 
especially in North America. In an 
average year, about 10,000 forest fires 
are ignited by lightning; in a severe 
season, the number may rise to 
15,000. The problem is particularly 
acute in the western states, where 
lightning ignites over 70 percent of 
the forest fires. Here, hundreds of 
fires may be ignited in a single day, 
many of them in remote and inac- 
cessible regions. These peaks in oc- 
currence, along with existing heavy 
fire loads, tax fire-suppression agen- 
cies beyond reasonable limits of man- 
power and equipment. Fire-suppres- 
sion costs can be very high; direct 
costs may approach $100 million per 
year while losses of commercial tim- 
ber, watersheds, and other forest 
resources may be several times this 
amount. In addition to loss of human 
lives, lightning fires constitute a 
growing threat to homes, businesses, 
and recreational areas. 

Potential Modification Techniques 

What steps could be taken in 
weather modification to alleviate the 
lightning-fire problem? The most ob- 
vious is to reduce the number of 
cloud-to-ground discharges, particu- 
larly during periods of high fire 
danger. Those characteristics of dis- 
charges most likely to cause forest- 
fire ignition might be selectively mod- 
ified to decrease their fire-starting 
potential. Also, the amount of rain 
preceding or accompanying lightning 
could be increased in order to wet 
forest fuels and thus decrease the 
potential for fire ignition and spread. 

A Seeding Experiment — The large 
losses in natural resources each year
caused by lightning-ignited forest
fires have prompted the Forest Service
of the U.S. Department of Agricul- 
ture to perform a series of experi- 
ments in the northern Rocky Moun- 
tains which are aimed at reducing 
fire-starting lightning strokes by 
massively seeding "dry" thunder- 
storms over the national forests. 
Following is a summary of results of 
the studies of lightning-fire ignition 
and lightning modification. 

The first systematic program of 
lightning modification was conducted 
in western Montana in the summers 
of 1960 and 1961. This two-year
pilot experiment was designed to test 
the effect of seeding on lightning 
frequency and to evaluate lightning- 
counting and cloud-seeding methods 
in mountainous areas. Some 38 per- 
cent fewer ground discharges were 
recorded on seed days than on days 
when clouds were not seeded. Intra- 
cloud and total lightning were less 
by 8 and 21 percent, respectively, 
on seed days during the two-year 
period. Analysis of these data by a 
statistical test showed that, if seeding 
had no effect, differences of this 
magnitude would occur by chance about
one time in four. Also, the experiment con-
firmed the need to develop a contin- 
uous lightning-recording system that 
could resolve the small-scale details 
of individual lightning discharges. 
Subsequently, a continuous lightning- 
recording system and improved cloud- 
seeding generators were developed. 
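The "one time in four" figure is a chance probability under the hypothesis that seeding has no effect. A minimal randomization test of the same flavor can be sketched as follows; the helper and any data fed to it are hypothetical illustrations, not the experiment's actual records.

```python
import random

def permutation_p_value(seeded, unseeded, trials=10_000, seed=0):
    """Estimate the chance of a mean decrease at least as large as the one
    observed, by randomly reassigning the seed / no-seed day labels."""
    rng = random.Random(seed)
    observed = sum(unseeded) / len(unseeded) - sum(seeded) / len(seeded)
    pooled = list(seeded) + list(unseeded)
    n = len(seeded)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        # Mean difference under a random relabeling of the same counts
        diff = sum(pooled[n:]) / (len(pooled) - n) - sum(pooled[:n]) / n
        if diff >= observed:
            hits += 1
    return hits / trials
```

A returned value near 0.25 would correspond to the "one time in four" statement: the observed deficit on seeded days is suggestive but not conclusive.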

Building a Data Base 

A new lightning-modification ex- 
periment was begun in 1965, with 
the first phase to last for three sum-
mer seasons. The objectives were
to gain additional information on 
the frequency and characteristics of 
lightning from mountain thunder- 
storms and to determine if there is 
a significant difference in the occur- 
rence and character of lightning from 
seeded and unseeded storms. It was 
not designed to confirm or reject a 
single mechanism by which lightning 
is modified by seeding. Rather, a 
primary objective was to build a 
body of observations of lightning 
from both seeded and unseeded 
storms and to use these data to build 
appropriate hypotheses and models 
for testing in future experiments. 
Appropriate statistical tests were in- 
cluded in the design of the experi- 
ments as a basis for evaluating dif- 
ferences attributable to treatment. 

Analysis of data on the basis of 
the life cycle of individual thunder- 
storms occurring in 1965-67 (14 no 
seed, 12 seeded storms) gave the 
following results at the given level of 
significance for two-tailed tests: 

1. Sixty-six percent fewer cloud- 
to-ground discharges, 50 per- 
cent fewer intracloud dis- 
charges, and 54 percent less 
total storm lightning occurred 
during seeded storms than dur- 
ing the unseeded storms. 

2. The maximum cloud-to-ground 
flash rate was less for seeded 
storms. Over a 5-minute inter- 
val, the maximum rate averaged 
8.8 for unseeded storms and 
5.0 for seeded storms; for 15- 
minute intervals, the maximum 
rate for unseeded storms aver- 
aged 17.7 as against 9.1 for 
seeded storms. 


3. There was no difference in the 
average number of return 
strokes per discrete discharge 
(4.1 unseeded vs. 4.0 seeded). 

4. The average duration of dis-
crete discharges (period be-
tween first and last return
stroke) decreased from 235 mil-
liseconds for unseeded storms
to 182 milliseconds for seeded
storms.
5. The average duration of con- 
tinuing current in hybrid dis- 

charges decreased from 187 
milliseconds for unseeded 
storms to 115 milliseconds for 
seeded storms. 
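The percent changes implied by the flash-rate and duration averages quoted above follow from simple arithmetic:

```python
def percent_reduction(unseeded, seeded):
    """Percent decrease from the unseeded average to the seeded average."""
    return 100.0 * (unseeded - seeded) / unseeded

# Averages quoted above for the 1965-67 storms:
five_min_rate = percent_reduction(8.8, 5.0)       # max 5-min flash rate, ~43%
fifteen_min_rate = percent_reduction(17.7, 9.1)   # max 15-min flash rate, ~49%
discharge_duration = percent_reduction(235, 182)  # discrete discharges, ~23%
continuing_current = percent_reduction(187, 115)  # continuing current, ~39%
```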


The results from the seeding ex- 
periments to date strongly suggest 
that lightning frequency and char- 
acteristics are modified by massive 
seeding with silver iodide freezing 
nuclei. While the physical mechanism 
by which massive seeding modifies 
lightning activity is not fully under- 
stood, there is evidence that the 

basic charging processes are altered 
by the seeding. Further, it has been 
established on the basis of direct 
measurements that hybrid discharges 
(lightning strokes that contain a con- 
tinuing current) may be responsible 
for most lightning-caused forest fires. 
Thus, a substantial reduction in the 
duration of the continuing-current 
portion of the hybrid discharge may 
have a large effect on the ability of 
an individual discharge to ignite fuels 
or to cause substantial damage. This 
change in the nature of the discharge 
may be more important than a change 
in the total amount of lightning that 
is produced by the storms. 







The Causes and Nature of Drought and its Prediction 

Drought is one of the manifesta- 
tions of the prevailing wind patterns 
(the general circulation). A few spe- 
cial remarks may clarify this mani- 
festation, and suggest further work 
necessary to understand and predict
it.

Virtually all large-scale droughts 
(like the Dust Bowl spells of the 
1930's or the 1962-66 New England 
drought) are associated with slow and 
prevailing subsiding motions of air 
masses emanating from continental 
source regions. Since the air usually 
starts out dry, and the relative hu- 
midity declines as the air descends, 
cloud formation is inhibited — or, if 
clouds are formed, they are soon
dissipated.

The atmospheric circulations that 
lead to this subsidence are certain 
"centers of action," like the Bermuda 
High, which are linked to the plan- 
etary waves of the upper-level wester- 
lies. If these centers are displaced 
from their normal positions or are 
abnormally well developed, they of- 
ten introduce anomalously moist or 
dry air masses into certain regions 
of the temperate latitudes. More im- 
portant, these long waves interact 
with the cyclones along the polar 
front in such a way as to form and 
steer their course into or away from 
certain areas. In the areas relatively 
invulnerable to cyclones, the air de- 
scends, and if this process repeats 
time after time, a deficiency of rain- 
fall leading to drought may occur. 
In other areas where moist air is 
frequently forced to ascend, heavy 
rains occur. Therefore, drought in 
one area is usually associated with 
abundant precipitation elsewhere. 
For example, precipitation was heavy 
over the Central Plains during the
1962-66 drought in the northeastern
United States. 

After drought has been established 
in an area, it seems to have a tend- 
ency to persist and expand into ad- 
jacent areas. Although little is known 
about the physical mechanisms in- 
volved in this expansion and per- 
sistence, some circumstantial evi- 
dence suggests that numerous 
"feedback" processes are set in mo- 
tion which aggravate the situation. 
Among these are large-scale inter- 
actions between ocean and atmos- 
phere in which variations in ocean- 
surface temperature are produced by 
abnormal wind systems, and these in 
turn encourage further development 
of the same type of abnormal circu- 
lation. Then again, if an area such 
as the Central Plains is subject to 
dryness and heat in spring, the 
parched soil appears to influence sub- 
sequent air circulation and rainfall 
in a drought-extending sense. 

Finally, it should be pointed out 
that some of the most extensive 
droughts, like those of the 1930's 
Dust Bowl era, require compatibly 
placed centers of action over both 
the Atlantic and Pacific oceans. 

In view of the immense scale and 
complexity of drought-producing sys- 
tems, it is difficult for man to devise 
methods of eliminating or ameliorat- 
ing them. However, given global 
data of the extent described previ- 
ously, and the teamwork of oceanog- 
raphers, meteorologists, and soil 
scientists, it should be possible to 
understand the interaction of con- 
tinent, ocean, and atmosphere suf- 
ficiently so that reasonably accurate 
estimates of the beginnings and end- 
ings of droughts are possible. 

Ability to predict droughts would 
be of tremendous planning value. 
Unfortunately, encouragement for 
drought research comes only after a 
period of dryness has about run its 
course, because the return of normal 
or abundant precipitation quickly 
changes priorities to more urgent 
matters. Without continuing in- 
depth drought studies, humanity will 
always be unprepared to cope with 
the economic dislocations induced by 
unpredictable long dry spells. 

It has long been known that the 
general circulation of the atmosphere 
is such that alternating latitude belts 
of wetness and dryness tend to domi- 
nate the world system of climates. 
(See Figure VI-1) In connection with 
droughts, the important belts are: 

1. The equatorial belt of wetness 
associated with ascending cur- 
rents in the zone where the 
trade winds from the southern 
and the northern hemisphere
converge;

2. The subtropical belt of dryness 
associated with descending air 
motions in the so-called sub- 
tropical anticyclones; 

3. The mid-latitude belt of wetness 
associated with traveling de- 
pressions and storms that de- 
velop in the zone of transition 
between warm and cold air 
masses — i.e., the "polar front." 

While the equatorial belt of wet- 
ness is more or less continuous around 
the world, the subtropical belt of dry- 
ness is disrupted by monsoon-like 
winds in the warm seasons and by 
polar-front disturbances in the cold 
season. As a result, rainfall is gen- 
erally adequate along subtropical east 




Figure VI-1

Over 80 inches annual mean rainfall

From 40 to 80 inches

From 10 to 40 inches

Under 10 inches

The map shows annual precipitation over the world compiled from land-station data 
and some ship and island observations. Isopleths over the ocean areas, which 
show large "dry" patches off western continental coastlines, are best guesses. 

coasts (e.g., Florida), while dryness 
typically prevails along subtropical 
west coasts (e.g., southern California) 
and in adjacent continents. Finally, 
the mid-latitude belt of wetness will 
be disrupted where mountain ranges 
(e.g., the Rocky Mountains) provide 
shelter against rain-bearing winds 
from nearby oceans. 

Between the semi-permanent cli- 
matic patterns, which do not change 
perceptibly, and the rather lively 
short-term patterns associated with 
traveling disturbances and storms, 
there exist regimes of long-lived 
anomalies superimposed on the gen-
eral circulation. These anomalies are
quasi-stationary or move very slowly, 
and their duration and intensity may 
vary within wide limits. Anomalies of 
this kind are always present, and 
when their duration and intensity ex- 
ceed certain limits of dryness, they 
become recognized as droughts. Most 
national weather services have estab- 
lished definitions of drought; al- 
though these are useful for record- 
keeping, administrative actions, and 
such, they do not reflect scientific 
principles. In the following, the word 
drought will be used in the meaning of an extensive period of excessive dryness.

Research Findings 

There is some indication that cer- 
tain time-lag relationships exist. For 
example, Namias found that many 
summer droughts in the United States 
appear to be associated with changes 
in the upper atmosphere that begin 
to develop in the preceding spring.
There is a need here for more research 
to determine whether reliable two- 
way statistical relationships exist and 
are applicable to independent sets of 
data; if this should prove to be so, 
techniques for predicting the onset of 
individual droughts might be developed.


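The two-step statistical check described above can be sketched in a few lines: fit a time-lag relationship on one part of the record, then verify it on an independent part. The data here are synthetic with a built-in lag relationship, purely to illustrate the procedure; none of the numbers come from the report:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 60                                   # hypothetical years of record
spring_index = rng.normal(size=n)        # spring upper-air anomaly index
summer_rain = -0.6 * spring_index + rng.normal(scale=0.8, size=n)

# Fit the time-lag relationship on the first half of the record...
slope, intercept = np.polyfit(spring_index[:30], summer_rain[:30], 1)

# ...then verify it on the independent second half, as the text suggests.
predicted = slope * spring_index[30:] + intercept
skill = np.corrcoef(predicted, summer_rain[30:])[0, 1]
print(f"correlation on independent data: {skill:.2f}")
```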
The factors that determine the dur- 
ation of droughts have not been well 
explored and no predictive capability 
exists. The droughts that have re- 
ceived most attention are those that 
have affected agricultural operations 
— i.e., late spring and summer 
droughts. Some of these have been 
unable to survive the hardships of the 
winter following, but others have 
shown a tendency to recur the next 
spring or summer, and these pro- 
longed droughts are of great interest 
economically as well as scientifically. 

There is evidence to indicate that 
drought-producing systems tend to 
develop in families (rather than as in- 
dividuals), though each member may 
not qualify as a drought according 
to official definitions. For example, 
Namias found that drought-producing 
anticyclones over the agricultural 
heartland of North America have 
companion anticyclones on the Pacific 
as well as on the Atlantic. Drought- 
producing anticyclones in the lower 
atmosphere appear to be associated 
with distortions of the flow patterns 
through deep layers. Our knowledge 
of these conditions is meager; much 
firmer information could be provided through special analyses of existing data.

Although an official drought may 
cover a relatively small region, the 
associated atmospheric processes must 
be studied in the context of the gen- 
eral circulation of the atmosphere, in- 
cluding the principal sources of heat 
and moisture. 

The Causes of Drought 

The above-mentioned findings — 
that drought-producing systems tend 
to occur in families and that individ- 
ual droughts may span one or more 
annual cycles — are of considerable 
scientific significance and hold out 
hope of progress toward prediction. 
These findings point toward the phy- 
sical processes that create the large- 
scale anomalies of which droughts are 

manifestations. Since extraterrestrial 
influences can safely be ruled out, it 
is clear that the forces, or energy 
sources, that bring about these anom- 
alies must develop within the earth- 
atmosphere system itself. Further- 
more, since an individual drought in 
middle and high latitudes (where the 
annual variation is large) may outlast 
an annual cycle, it is plausible that the 
underlying energy sources are rooted 
in the equatorial belt (where the an- 
nual change is small). 

Bjerknes has recently produced se- 
lected analyses that indicate, with a 
high degree of certainty, that the gen- 
eral circulations of the atmosphere in 
middle and high latitudes respond 
readily and significantly to energy inputs resulting from variations in the ocean-atmosphere interactions in low
latitudes. Of special importance is 
the transfer of heat and moisture from 
the oceans, and the freeing of latent 
heat by condensation in the air. The 
major site of interactions resulting in 
varying inputs of energy is the equa- 
torial belt from the west coast of 
South America to beyond the date 
line. Significant impulses can also be 
traced to the Humboldt Current, the 
Indian Ocean, and other areas. 

Bjerknes found that the upwellings 
of cool water, resulting from the vary- 
ing convergence of the trade winds, 
undergo changes that may be large at 
times, and these affect the rate at 
which energy is supplied to the atmos- 
phere in the equatorial belt. These in- 
puts are, in turn, exported via upper 
air currents as various forms of 
energy to the mid-latitude belt, where 
they bring about distortions of the 
flow patterns, dislocations of the 
storm tracks, and regional anomalies 
of different kinds. Of particular in- 
terest in connection with droughts is 
the tendency for more or less sta- 
tionary offshoots from the subtropical 
belt of dryness to disrupt the mid- 
latitude belt of wetness. Bjerknes' 
findings are of great interest and raise 
hopes for progress in long-range pre- 
diction and other applied areas. 

Research Aspects — It is clear from 
the foregoing discussion that our 
knowledge of drought is fragmentary 
and that much work remains to be done before adequate descriptions of
individual or typical droughts can be 
provided. An individual drought must 
be recognized and described as a 
member of a family of anomalies, and 
its characteristics must be related to 
the evolution of these anomalies. Un- 
doubtedly, such descriptive studies 
will lead to greater insights into the 
underlying general mechanisms as 
well as the many local or regional 
factors that determine the severity of 
droughts. In the past, research on 
droughts has been conducted on an 
ad hoc basis, with emphasis on local 
or regional conditions. A concerted 
effort, making full use of available 
data and data-processing facilities, 
seems justified in terms of national re- 
quirements as well as available talent. 

Although the broad aspects of the 
causes of droughts appear to be un- 
derstandable on the basis of Bjerknes' 
findings, much work remains to be 
done to relate the evolution and the 
characteristics of atmospheric anom- 
alies to specific variations in the ap- 
propriate ocean-atmosphere interac- 
tions. Empirical studies should be 
matched with construction of models 
to simulate the behavior of the atmos- 
phere in response to observed or in- 
ferred ocean-atmosphere interactions. 

It is clear that the research oppor- 
tunities in this general area are highly 
promising. Data are available to sup- 
port analyses of many cases, with ex- 
tensions to longer time-spans. The 
present recognition of a need for im- 
proved understanding of our environ- 
ment and better management of our 
natural resources is likely to stimu- 
late application. The research is likely 
to appeal to young talent in several 
disciplines. And the research is likely 
to provide important inputs to the co- 
operative schemes of the International 
Decade of Ocean Exploration and the 
Global Atmospheric Research Program.


The Prediction of Drought 

The National Weather Service is- 
sues monthly general forecasts of 
large-scale patterns of temperature 
and rainfall, and from such forecasts 
the likelihood of onset of drought dur- 
ing the month concerned may be inferred in general terms. At the present
time, no specific techniques for pre- 
dicting drought exist. It is possible 
that a study of time-lag relationships 
for large areas could provide useful 
guidance. It is possible, too, that run- 
ning analyses of the conditions within 
the Pacific section of the equatorial 

belt and related studies of the re- 
sponses of the mid-latitude atmos- 
phere would provide useful prediction 
aids. Finally, the results of the above- 
mentioned studies are likely to be 
considerably sharpened through nu- 
merical experiments with dynamical 
simulation models. 



Artificial Alteration of Natural Precipitation 

The scientific basis of all efforts to 
modify precipitation artificially rests 
on manipulating the rates of reaction 
of natural precipitation mechanisms. Our qualitative understanding of natural precipitation mechanisms is in rather good shape. (See Figure VI-2.)
But our knowledge of the quantitative 

aspects of these processes is generally 
quite poor. There are several reasons 
for this state of affairs: 

1. The process rate coefficients are 
inadequately known. 

2. Several of the processes are competitive, so that small initial differences may give one of them an ever-widening advantage.
3. The initial and boundary condi- 
tions are known to be important 
but are poorly understood and 
difficult to measure. 










[Figure VI-2 appears here. Legible labels in the flow chart include "slow broadening by coalescence," "freezing," "wet and dry riming with drops," "partial melting," and "heterogeneous" nucleation.]
In this flow chart, the precipitation process is seen to begin with water vapor and 
one of several different types of nuclei. Through various processes, the nuclei 
obtain vapor and grow. The final form of the precipitation depends on the environ- 
ment through which the precipitation falls. The various forms of precipitation that 
are observed in nature are listed at the bottom of the chart. By tracing their path 
upward through the chart, it is possible to determine the conditions necessary for 
their production. 



4. There are several feedback loops 
whereby a change in the micro- 
physical character of a cloud 
parcel, as a result of precipita- 
tion development, feeds back 
into the energetics of the cloud 
and thereby alters the boundary 
conditions in which the precipi- 
tation processes operate. These 
feedback loops are largely unex- 
plored. They range in scale from 
the release of heat of phase 
change, causing a small cloud 
parcel to accelerate upward, 
thereby increasing its conden- 
sate load, to large-scale, long- 
range effects whereby a major 
change in the cloud system at 
one point induces adjustments 
in the atmosphere tens or hun- 
dreds of cloud-diameters away. 

Natural Nuclei and their Relation 
to Weather Modification 

Almost all U.S. efforts to change 
precipitation through cloud seeding 
(whether to increase, decrease, or re- 
distribute either rain or snow) rest on 
the observation that the normal be- 
havior of a cloud can be altered 
through the introduction of large 
numbers of suitable nuclei. 

There are two types of natural 
nuclei, serving two different func- 
tions, in natural clouds: 

1. Cloud nuclei (small soluble par- 
ticles of the order of 0.1 to 3 
microns in diameter), which 
serve as condensation centers 
for liquid cloud droplets. 

2. Ice nuclei (probably clay min- 
erals about 1 micron in diam- 
eter, although the exact nature 
of these particles is still in ques- 
tion), which serve as centers of 
initiation of ice particles either 
by freezing drops or directly 
from the vapor. 

Ice nuclei are necessary for snow 
production. Snow generated aloft may 
melt inside a cloud on its way to the 

ground and land as rain. Rain may 
also be initiated by a few specially 
favorable cloud nuclei acting through 
an all-liquid process. 

The relative importance of the two 
known precipitation mechanisms is 
not fully worked out. However, it 
appears that the all-liquid process is 
more important in warmer seasons 
and in maritime air masses, whereas 
the ice-crystal mechanism is probably 
more important in colder seasons and 
in continental weather events. 

The ice-crystal mechanism of pre- 
cipitation development was the first 
precipitation process proposed. It ap- 
peared to explain most available ob- 
servations until the late 1940's, when 
meteorologists began to make meas- 
urements inside clouds and to examine 
them with radar. The all-liquid pre- 
cipitation mechanism was essentially 
unknown before about 1950; even to- 
day its relative importance is not clear. 

The common occurrence of super- 
cooled clouds was taken as evidence 
to show that concentrations of nat- 
ural ice nuclei were often insufficient 
for effective precipitation production. 
Proponents of seeding thus argued 
that, through the addition of artificial 
nuclei, one could enhance the effi- 
ciency of the ice-crystal mechanism 
and thereby increase rain at the ground.

Technology quickly provided effi- 
cient tools for releasing large numbers 
of artificial ice nuclei. Present-day 
seeding generators, burning an acetone solution of silver iodide (AgI), yield effective ice nuclei concentrations of about 10¹³ to 10¹⁴ crystals per gram of AgI at −10° centigrade, increasing to about 10¹⁵ crystals per gram of AgI at −20° centigrade. This means that a single gram of AgI, if
completely and properly dispersed, 
would be capable of seeding 100 cubic 
kilometers. Technology has not yet, 
however, produced adequate tools for 
measuring the concentrations of nat- 
ural ice nuclei. 
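The 100-cubic-kilometer figure follows from the quoted yields with one added assumption: an effective seeding concentration of about one ice crystal per liter of cloud, a commonly used target that the report itself does not state. A minimal sketch of the arithmetic:

```python
# Back-of-envelope check of the "one gram seeds 100 cubic kilometers" claim.
# Assumption (not from the report): an effective concentration of about
# one ice crystal per liter of cloud.
crystals_per_gram = 1e14      # upper yield quoted for -10 deg C
target_per_liter = 1.0        # assumed effective concentration

liters_per_km3 = 1e12         # 1 km^3 = 10^9 m^3 = 10^12 liters
volume_km3 = crystals_per_gram / target_per_liter / liters_per_km3
print(volume_km3)             # -> 100.0
```

At the lower quoted yield of 10¹³ crystals per gram, the same assumption gives only 10 cubic kilometers, so the estimate is sensitive to generator efficiency.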

A more realistic, more scientific ap- 
proach to cloud seeding for altering 
precipitation is beginning to emerge. 
This approach recognizes, and at- 
tempts to relate, several interdepend- 
ent factors: 

1. There are two known precipita- 
tion mechanisms, only one of 
which depends on ice nuclei and 
only one of which is readily 
accessible through present-day 
seeding technology. 

2. The concentrations of natural 
nuclei, both cloud and ice par- 
ticles, and the internal structure 
of clouds of any given type 
differ importantly from time to 
time and place to place. For 
example, a substantial differ- 
ence between cloud spectra in 
maritime and continental cumuli 
is recognized as due to differ- 
ences in the cloud nuclei; ba- 
sically, it is this difference in 
drop spectra that gives mari- 
time clouds their propensity for 
warm rain. As a consequence 
of such differences, natural 
clouds differ markedly in their 
response to seeding. 

Not all responses to seeding 
are desirable. To give an ex- 
ample, Project WHITETOP 
found that AgI seeding of
summertime cumulus clouds in 
Missouri may have decreased 
the rainfall by as much as 40 
to 50 percent on days with 
south winds. 

3. The development of precipita- 
tion takes considerable time, in 
many cases about the same as 
the lifetime of the cloud parcels 
that nurture the precipitation 
development. Thus, most seed- 
ing efforts attempt to alter the 
time required for precipitation 
development relative to the life 
of the cloud, or, alternatively, 
attempt to extend the life of 
the cloud by activating feed- 
back loops between changes in 
cloud microstructure and cloud energetics. The seeding of small
cumuli over Florida and over 
nearby ocean areas aims at 
complete glaciation of the 
clouds to secure the maximum 
release of latent heat of fusion, 
which in turn might cause 
greatly expanded cloud development.
4. The optimum number of ice 
particles (hence the seeding re- 
quirement, if any) depends in a 
complex way on the detailed 
nature of the cloud and the de- 
sired end product. For example,
the Bureau of Reclamation 
project in Colorado aims at 
regulating the number of snow 
crystals in the clouds to be the 
minimum required in order that 
their combined growth rate just 
uses up the liquid water of the 
cloud by the time the cloud 
reaches the crest of the moun- 
tain divide. A lesser number 
would permit cloud liquid water 
to pass over the divide and be 
evaporated. A larger number, 
and slower growth, might result 
in individual crystals being too 
small to fall out before crossing 
the divide. 
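The targeting logic in point 4 can be caricatured with a toy calculation. Every constant below is an invented placeholder (the report quotes no values); the point is only the shape of the trade-off: too few crystals leave liquid to pass the divide, too many leave each crystal too small to fall out in time.

```python
# Toy model of the crystal-number trade-off. All constants are invented
# placeholders, not values from the Colorado project.
LIQUID_WATER = 0.2        # g of liquid per cubic meter of cloud (assumed)
MAX_PER_CRYSTAL = 5e-5    # g one crystal can sweep up before the crest (assumed)
MIN_FALLOUT = 1e-5        # g a crystal needs in order to fall out in time (assumed)

def seeding_outcome(crystals_per_liter):
    crystals_per_m3 = crystals_per_liter * 1000.0
    per_crystal = min(LIQUID_WATER / crystals_per_m3, MAX_PER_CRYSTAL)
    if per_crystal * crystals_per_m3 < LIQUID_WATER * 0.999:
        return "too few: unconverted liquid passes over the divide"
    if per_crystal < MIN_FALLOUT:
        return "too many: crystals too small to fall out in time"
    return "near optimum: liquid just used up, snow falls on the slope"

for n in (1.0, 10.0, 100.0):              # crystals per liter
    print(f"{n:5.1f}/liter -> {seeding_outcome(n)}")
```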

Requirements for Scientific 
Cloud Seeding 

The modern approach to cloud 
seeding is to couple the treatment 
method to the end objective through
specification of the target cloud and 
a knowledge of the intermediate phys- 
ical processes. To accomplish this re- 
quires elaborate systems for real-time 
measurement of deterministic meteor- 
ological factors, and real-time com- 
puter modeling of the physical proc- 
esses of the clouds to permit objective 
decisions as to when, where, and how 
to seed. 

Data Base and Related Technology 
— The data base on which to develop 
a scientific approach to cloud seeding 
is uneven. In some areas it is fairly 
good, in others almost totally lacking. 

The physical properties of cloud and 
precipitation particles, and the par- 
ticle-interaction coefficients, though 
incomplete, are sufficient for most 
purposes. Given an initial specifica- 
tion of cloud properties, one can make 
usable estimates of the growth of a 
limited number of precipitation par- 
ticles contained therein. Once the pre- 
cipitation particles become sufficiently 
numerous to interact appreciably, or 
in the ever present case of the inter- 
action of cloud drops, the bottleneck 
is not so much the lack of physical 
data as one of computer capability and 
mathematical devices to allow one to 
keep track of the large number of pos- 
sible interactions. 

A more serious difficulty is the gen- 
eral lack of data on the internal micro- 
structure of clouds as a function of 
cloud type, season, geography, and 
meteorological situation. Instruments 
for measuring ice and cloud nuclei are 
essentially laboratory devices and 
really not suitable for routine field 
use. Only recently have tools been 
developed for routine measurement of 
cloud-particle spectra. We have many 
measurements of nuclei and cloud- 
particle spectra from research proj- 
ects, but we still lack appropriate con- 
cepts for generalizing them in ways 
to permit useful extension to the un- 
measured cloud situation. 

Interactions and Downwind Effects 
— The feedback loops between the 
physics of particles inside clouds and 
the energetics of those clouds are almost totally unexplored. One can perceive a definite effort in this area in
cloud physics today. Important ad- 
vances are likely to come quickly in 
terms of the interactions inside single 
clouds. But the equally important 
problem of interaction between clouds 
and cloud systems on the mesoscale 
seems much more difficult. Such in- 
teractions are well known for the case 
of natural clouds. One should suspect 
them — indeed, there are signs point- 
ing to them — in the case of clouds 
altered by seeding. For example, 
measurements on Project WHITETOP 
indicated strongly that changes in 

rainfall due to seeding were accompanied by changes of opposite sign 50
to 100 miles downwind. 

Water and Energy Budgets of 
Clouds — An area of general meteor- 
ology of great importance to cloud 
seeding, and still inadequately ex- 
plored, concerns the water and energy 
budgets of clouds and cloud systems. 
Seeding to change precipitation pre- 
sumes to alter the water budget of the 
target cloud system, yet studies of 
the water and energy budgets of 
mesoscale weather systems are almost 
totally lacking. Braham carried out 
such a study for thunderstorms in 
1952. A study of the water budget of 
the winter storms involved in the Bu- 
reau of Reclamation seeding project 
in Colorado is presently under way. 
Virtually no other mesoscale weather 
system has been so studied. The rea- 
sons for this are primarily the inade- 
quacy, for this purpose, of data from 
the National Weather Service and 
the great cost of obtaining additional 
data specifically for such studies. Yet 
cloud seeding can never be soundly 
based until we know in considerable 
detail the water budgets of both the 
natural and treated storms. 

Looking to the Future 

The preceding paragraphs are con- 
cerned mainly with topics in physical 
meteorology concerned with seeding 
clouds to alter the amount of precipi- 
tation at the ground. There are a 
number of other matters that must be 
resolved before such seeding can be 
adequate for public purposes. Some 
of these are scientific in nature, others 
are issues of economics, sociology, 
and public policy. 

Unanswered Questions — Among 
the most important issues to be faced 
are four unanswered scientific questions:

1. Under what specific meteoro- 
logical conditions (including mi- 
crophysics and energetics of 
clouds) will a particular treatment technique result in a predictable cloud response?

2. Which of the various possible 
cloud responses would be useful 
to society, in what ways, and 
under what conditions? 

3. Given that a useful cloud re- 
sponse can be predicted from 
a particular treatment of some 
specific set of initial cloud con- 
ditions, are our abilities and 
tools for diagnosing the occur- 
rence of these conditions suf- 
ficient to permit exploitation of 
such treatment? In what time-space scale? In what economic terms?
4. What is the proper division of 
resources between:

(a) basic research, where the 
sought-for end product en- 
hances knowledge about 
clouds and their physical 
response to seeding; 

(b) pilot projects, where the 
chief objective is assess- 
ment of the economics of 
a particular cloud-modifi- 
cation scheme; and 

(c) field operations, where the 

principal aim is to maxi- 
mize the yield of a changed
weather element? 

Projected Scientific Activity — Be- 
cause of the complexity of the at- 
mosphere and our limited knowledge 
about modifying it, it is likely that 
the skill in recognizing seeding op- 
portunities can be developed only 
from the results of a number of care- 
fully designed experimental projects 
aimed at testing seeding hypotheses 
in various types of weather situations 
in different parts of the country. 
Project WHITETOP and the Bureau 
of Reclamation Upper Colorado Pilot 
Project are examples of what these 
projects might look like, each of 
which will require from three to ten 
years. Until such studies are carried 
out, scientists will probably be unable 
to specify how much precipitation can 
be changed, under what conditions, 
and how often these conditions oc- 
cur. Technology is already at hand 
and scientific principles of experi- 
ment design are known. We must, 
however, be prepared to accept dis- 
advantages as well as advantages to 
the underlying population. 

Economic and Social Implications — 
The interactions of cloud seeding with 
society are clearly enormous, but they 
are hard to detail because we lack 

firm information as to how much and 
how often precipitation can be modified, and also because most studies
have emphasized the scientific as- 
pects with little regard for the eco- 
nomic, social, and political issues. 

Since there are few places in the 
United States where the economy is 
tied to a single economic enterprise, 
almost any change in precipitation is 
likely to disadvantage some while 
working to the advantage of others. 
We sorely need studies to learn the 
full scope of public cost and public 
benefit of changes in weather. We
can start by using the natural vari- 
ability of weather and determine just 
how a departure of weather from 
long-term normality works its way 
through the economy of a region. 
Such studies — involving the collec- 
tive effort of sociologists, economists, 
and meteorologists — should be encouraged.

Even with such knowledge, one 
comes ultimately to the thorny is- 
sues of how we decide when and 
where to practice weather modifica- 
tion, and how the disadvantaged are 
to be compensated. Will insurance 
companies, for example, "pay off" in 
a region of cloud seeding if evidence 
develops that increasing rainfall also 
increases hail? 

The Status of Precipitation Management 

Research and operational weather- 
modification programs since the late 
1940's have served to identify proce- 
dures that appear related to precipi- 
tation increases. At the same time, 
these results have indicated areas 
where real understanding and com- 
petence are insufficient. 

A number of cloud-seeding tech- 
niques have been developed. Ground- 
based seeding with silver iodide 
(AgI), whose crystal structure re-
sembles that of ice (see Figure VI-3), 

is the most common technique, espe- 
cially for winter storms in moun- 
tainous terrain. The seeding ma- 
terial is carried aloft by vertical 
motion resulting from the instability 
of the air or from the lift due to the 
mountain barrier. One remaining 
fundamental problem involves diffu- 
sion of the seeding material. Proper 
seeding procedures require (a) that 
the proper number of nuclei reach the 
effective level in the cloud, and (b) 
that the effect of the seeding will be 
felt in the desired location on the 

ground. The diffusion process is a 
rather complex function of vertical 
temperature distribution and the 
three-dimensional wind field. 
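The dependence on stability and winds noted above is usually treated with dispersion models. A deliberately crude Gaussian-plume sketch (standard textbook form, with arbitrary placeholder spread rates, nothing from the report) shows why targeting is delicate: concentration at cloud level first rises as the plume grows deep enough to reach it, then falls with further dilution downwind.

```python
import math

# Highly simplified Gaussian-plume sketch of how seeding material from a
# ground generator spreads downwind. The linear spread rate and equal
# horizontal/vertical spreads are arbitrary simplifications.
def plume_concentration(q, u, x, z, sigma_z_per_m=0.1):
    """Centerline concentration at downwind distance x (m) and height z (m),
    for source strength q (units/s) and wind speed u (m/s)."""
    sigma_z = sigma_z_per_m * x          # crude linear plume growth
    sigma_y = sigma_z                    # assume equal crosswind spread
    return (q / (2 * math.pi * u * sigma_y * sigma_z)
            * math.exp(-z**2 / (2 * sigma_z**2)))

# Concentration at an assumed cloud level of 2,000 m, at three distances.
for x in (5_000, 20_000, 80_000):        # meters downwind
    c = plume_concentration(q=1.0, u=10.0, x=x, z=2_000)
    print(f"{x:6d} m: {c:.2e}")
```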

Airborne seeding with silver iodide 
or crushed dry ice is frequently em- 
ployed with summer convective 
storms. The primary limitation of 
aerial operations is whether or not 
the aircraft can fly in weather condi- 
tions where seeding will be effective. 

Various experimental designs and 
statistical evaluation procedures have 




The models show the crystal structures of ice and silver iodide (AgI). In the model of AgI, the white spheres are iodide ions and the black spheres silver ions. Although the crystal structures of both molecules are similar, the lattice constant of AgI is 1.5% larger than that of ice. Partial compensation for the difference can be made by coprecipitating silver bromide (AgBr) with AgI and substituting Br for as many as 30% of the I atoms in the AgI crystal structure, which will produce a unit cell up to 0.5% smaller than that of pure AgI.

been used. In retrospect, some of 
them were inadequate. Nevertheless, 
the early programs did show that 
cloud seeding has a tremendous potential.

While the bulk of the activity in 
precipitation augmentation involves 
seeding clouds with artificial nuclei, 
other procedures have been proposed 
and are being studied. Modification 
of radiation processes is an example. 
If a large area (several acres or more) 
is covered with asphalt, the increased 
heating of the air immediately over 
the area can lead to strong convective 
currents, sufficient under some cir- 
cumstances to stimulate the precipita- 
tion process. Another possibility in- 
volves increasing the humidity high 
in the air so that more water would 
be available for the natural precipita- 
tion processes. Several ideas have 
been offered for extracting water 
from coastal stratus clouds. 

The obvious goal for weather- 
modification research, considered as 
a whole, is to find the best system 
for any given situation. However, 
the wide variety of conditions under 
which clouds and storms occur, cou- 
pled with the different types of to- 
pography over which these clouds 
develop, show that several, perhaps 

many, procedures must be available 
to get the best results from every 
situation. It is unlikely that the real 
world will ever see a truly "best" 
system for all conditions. A reason- 
able procedure, short of finding the 
absolute "best" way, is to put the 
available techniques, equipment, and 
instrumentation together in such a 
way that, under the existing condi- 
tions, the desired effect is maximized. 
In other words, optimize the available resources.

What Constitutes a Precipitation 
Management System? 

A true precipitation-management 
system, even a crude and inefficient 
one, will have four major compo- 
nents: (a) a component to analyze 
present and expected water needs 
and water sources, as well as the 
anticipated effects of precipitation 
management on such factors as the 
economy and ecology of the area in 
question, and arrive at a decision 
to employ precipitation-management 
techniques; (b) a component to recog- 
nize a weather situation where the 
application of precipitation-manage- 
ment techniques would result in the 
desired effect and also, hopefully, 
those situations in which the result 

would be deleterious; (c) a component to select the proper treatment
material and delivery system for the 
situation at hand; and (d) a com- 
ponent to assess the actual results of 
the treatment in terms of useful water 
on the ground, economic benefits and 
disbenefits, and environmental consequences.

Analyzer Function — The first ac- 
tivity of an operational system is to 
determine when the application of 
precipitation-management techniques 
could contribute to the resolution of 
a water problem of a particular area. 
After the specific need is defined, 
the various potential sources of addi- 
tional water (e.g., the atmosphere, 
water mining, re-use) are examined 
to find the best way to fill the need. 
The effects of the application of 
precipitation-management techniques 
on the economics, ecology, and so- 
ciology of the area are examined. 

Another important consideration 
is whether or not the increased pre- 
cipitation would fall where a sub- 
stantial portion of it would eventually 
be usable. There are also legal ques- 
tions that must be looked at, such 
as ownership of the land being af- 
fected, ownership of the moisture 
being withdrawn, licensing and in- 
demnification procedures, and report- 
ing procedures. 

When all the available informa- 
tion has been considered, a decision 
is made. Precipitation-management 
techniques may be inappropriate for 
a variety of reasons, or they may be 
the only techniques available. Usu- 
ally, however, precipitation manage- 
ment will be used in addition to other 
methods of acquiring additional water.

Recognition — Once a decision has 
been made to use weather modifica- 
tion in the solution of a problem, 
treatable situations must be identi- 
fied. Many of the necessary condi- 
tions for successful weather modifica- 
tion are known, at least qualitatively, 
but we do not yet know if these are 
sufficient conditions. 



One important factor in determin- 
ing whether or not a given weather 
situation is treatable is the number of 
natural nuclei. Nuclei are needed to 
convert vapor into liquid; other nuclei 
are needed to convert liquid into ice. 
The presence of ice crystals is con- 
sidered critical to precipitation for- 
mation in most clouds that occur in 
the middle latitudes. If liquid drop- 
lets are present at temperatures below 
freezing, a deficit of ice nuclei is implied.
Such a deficit in an otherwise suitable 
cloud can be overcome by the addi- 
tion of artificial nuclei. The addi- 
tional nuclei will convert some of the 
droplets into ice crystals, which will 
grow at the expense of the liquid 
droplets until they are large enough 
to fall out, thereby initiating or in- 
creasing precipitation. There are few 
routine observations of natural nuclei 
numbers, and most counts are made 
at the surface, not aloft where the 
clouds are. We have only rather 
crude notions of how many nuclei 
are needed in any given situation. 

Some of the other factors of im- 
portance in the treatability of a 
weather system are temperature 
structure, wind, liquid water content 
of the cloud, and cloud-droplet size 
spectra. Again, we have fairly good 
qualitative understanding of the role 
of each factor, but we do not com- 
pletely understand all the links in the 
physical chain of events leading to 
the desired result of the modification 
attempt. In addition, some of the 
pertinent factors are difficult to meas- 
ure. Still other factors may be im- 
portant in cloud treatability, but our 
knowledge of them in real cloud 
situations is too meager even for 
qualitative statements. 

In some situations, theory and em- 
pirical evidence have been united in 
mathematical models. These models 
simulate the atmosphere and can 
predict the response of the cloud to 
a given treatment. While the models 
available today are comparatively 
crude, they play a valuable role in 
enabling scientists to recognize treat- 
able situations. 

Treatment — After a situation is 
identified as treatable, the appropriate 
materials and techniques must be 
chosen. The most frequently used 
materials for weather-modification ac- 
tivities are AgI and dry ice, but many
other substances have been used ex- 
perimentally (salt, lead iodide, cal- 
cium chloride, and a host of organics 
including metaldehyde, phlorogluci- 
nol, urea, and 1,5-dihydroxynaph- 
thalene). The temperature at which 
each of these agents becomes effective 
is fairly well known (see Figure VI-4), 
as is the particle-size requirement (for 
AgI, on the order of 0.1 micron).

Clouds can be classified into two 
categories, cold and warm. Cold 
clouds are those with temperatures 
wholly or partly at or below 0° cen- 
tigrade. Warm clouds are those ev- 
erywhere warmer than 0° centigrade. 
Materials that affect cold clouds 
rarely have any effect on warm clouds. 
Thus, the treatment material must 
be matched to the situation. The 
object is to change the size and/or 
state of the cloud particles. Precipi- 
tation from warm clouds can be in- 
creased if the small droplets can be 
turned into big droplets. 

Hygroscopic materials should be 
effective in warm clouds. They are, 
in fact, being used experimentally, 
though it has proved difficult both to 
get the material ground to a small 
enough size to stay in the cloud long 
enough to be effective and to keep the 
particles dry until they are released 
to the atmosphere. Once a few drop- 
lets large enough to begin to fall are 
formed, coalescence should keep the 
process going until precipitation falls 
out of the cloud. 

Hygroscopic materials should also 
be effective in cold clouds, but mate- 
rials that initiate a phase change are 
more efficient. Some cold-cloud 
agents, such as dry ice, simply cool 
the air and the vapor and liquid in 
it to a temperature at which tiny ice 
crystals form spontaneously. This 
process is effective at air tempera- 
tures a few degrees below freezing 



[Figure VI-4: a table of effective temperatures for nucleating agents, including carbon dioxide, NH4F, V2O5, CdI2, and I2, together with natural materials (loam, Rugby, N.D.; loess, Hanford, Wash.; soil, Baggs, Wyo.; ash, Crater Lake, Ore.; dust, Phoenix, Ariz.); the temperature values themselves are not recoverable here.]

The table lists some of the more prominent substances that are used as nucleating 
agents and the temperature at which they become effective as nuclei. 



or lower. Other materials, such as 
silver iodide, are known to be effec- 
tive, but why they work is not clearly 
understood. The crystal structure of 
AgI is quite similar to that of ice, 
and this was thought to be the rea- 
son for its effectiveness. Recent stud- 
ies suggest that pure AgI is a rather 
poor nucleating material, and that it 
must be contaminated with some 
other material to be useful in weather-modification work. 

Different methods are needed to 
deliver the various materials to the 
cloud. Dry ice is dropped into clouds, 
usually from an airplane. The size of 
the dry-ice pellets depends on the 
vertical thickness of the cloud. Silver 
iodide can be released from the air 
or from the ground. Ground releases 
rely on the horizontal and vertical 
airflow to carry the material to the cloud. 

One major problem is to confine 
the effects of treatment to a desig- 
nated target area. The point on the 
ground where the effects will be felt 
is determined by the point of release 
of the material, the concentration of 
the material at the release point, the 
diffusion of the material (a function 
of the three-dimensional wind field), 
the time required for the material to 
become effective once it is in the 
cloud, and the time required for the 
altered cloud characteristics to show 
up on the ground. The usual pro- 
cedure involves assumptions about 
mean values and average times, with 
reliance on the skill of the operator 
to integrate the various factors sub- 
jectively. Several mathematical mod- 
els have been developed that predict 
the area of effect; as these models, 
and the data they use, improve, tar- 
geting procedures should also improve. 
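
The mean-value targeting procedure described above can be reduced to a back-of-the-envelope sketch. The function and the wind and time values below are hypothetical illustrations, not an operational method; real targeting must integrate the full three-dimensional wind field and the diffusion of the material.

```python
def target_offset_km(wind_speed_ms, activation_time_s, response_time_s):
    """Estimate how far downwind of the release point the seeding
    effect reaches the ground: the material drifts with the mean wind
    while it becomes effective in the cloud and while the altered
    precipitation develops and falls out.  A mean-value sketch only."""
    drift_time_s = activation_time_s + response_time_s
    return wind_speed_ms * drift_time_s / 1000.0

# Hypothetical values: 10 m/s mean wind, 10 min for the material to
# become effective, 20 min for precipitation to reach the ground.
print(target_offset_km(10.0, 600.0, 1200.0))  # 18.0 km downwind
```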

Despite the uncertainties in how 
the material works, how much is 
needed, and where and how it should 
be released, present capabilities are 
sufficient to warrant a certain number 
of operational precipitation-modifica- 
tion programs. In these cases, the 

areas to be affected are relatively 
small and the objectives sufficiently 
narrow so that the uncertainties can 
be taken into account in the program design. 

Evaluation — The final phase of 
a functioning weather-modification 
system is evaluation of the results. 
Evaluation techniques include the 
standard statistical approaches: target 
vs. control; treat vs. no treat; ran- 
domized crossover, and so on. Both 
parametric and nonparametric statis- 
tics are used. A few new variations 
have been considered but are not 
being used except experimentally. 
Given a suitable experimental design, 
existing statistical evaluation pro- 
cedures are acceptable for programs 
that go on for several years and in 
which the evaluation can wait until 
the end of the program. 
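
For a randomized seed/no-seed design of the kind mentioned above, one standard nonparametric evaluation can be sketched as a permutation test. The precipitation figures below are invented purely for illustration:

```python
import random

def permutation_test(seeded, not_seeded, n_perm=5000, seed=1):
    """One-sided permutation test: how often does a random relabeling
    of the pooled storm events produce a seeded-minus-control
    difference in mean precipitation at least as large as observed?"""
    observed = sum(seeded) / len(seeded) - sum(not_seeded) / len(not_seeded)
    pooled = list(seeded) + list(not_seeded)
    n_s = len(seeded)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:n_s]) / n_s - sum(pooled[n_s:]) / (len(pooled) - n_s)
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Invented storm totals (inches) for seeded and unseeded events.
seeded = [0.42, 0.55, 0.61, 0.38, 0.70, 0.52, 0.47, 0.66]
control = [0.31, 0.29, 0.44, 0.35, 0.40, 0.27, 0.33, 0.38]
p_value = permutation_test(seeded, control)
```

A small p-value says the observed increase is unlikely under chance relabeling; as the text notes, such tests become convincing only over programs lasting several years.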

Full evaluation includes not only 
the amount of precipitation produced 
but also the economic consequences 
of the activity and the effects on the 
social and biological environment. 

Current Scientific Status 

Large quantities of data at or near 
the earth's surface have been gath- 
ered from experimental areas. Upper- 
air data are generally insufficient in 
terms of frequency and density. 
Because most weather-modification 
activities are rather small and inde- 
pendent of one another, data gather- 
ing is not standardized with respect 
to time of observation, duration, 
precision, or reliability. Some of the 
data from commercial programs are 
not readily available. Perhaps the 
greatest limitation of the present data 
base is the scarcity of measurements 
of some of the important factors in 
precipitation augmentation, such as 
natural nuclei counts. Lack of suit- 
able instruments is, in part, respon- 
sible for this situation. 

Extra-Area Effects — While scien- 
tists have not had the quality data 
they would have liked, significant 
advances have occurred in the past 

few years. One interesting phenom- 
enon was recently recognized: In 
major field programs for increasing 
rain, changes in the precipitation pat- 
tern well outside the designated tar- 
get areas have been noted. The 
changes were patterns of negative 
and positive anomalies, but the in- 
creases were more substantial than 
the decreases. This suggests that 
some sort of dynamic effect is caused 
by cloud seeding, resulting in an 
average precipitation increase over a 
very large area. These effects are 
sometimes felt upwind and laterally 
as well as downwind of the target 
area. In at least one experiment, the 
precipitation of an entire area was 
increased, with target-area precipita- 
tion significantly greater even when 
compared with the precipitation-in- 
creased controls. How universal these 
effects are and under what conditions 
they occur are not clearly understood. 
The importance of this phenomenon 
in evaluation is obvious. 

The Significance of Cloud-Top 
Temperature — One of the most im- 
portant discoveries of the 1960's was 
identification of the importance of 
cloud-top temperature on the effec- 
tiveness of cloud seeding. Stratifica- 
tion of data by temperature indicates 
large precipitation increases from 
seeded winter orographic clouds when 
the temperature at or near the cloud 
top is between about -15° and -20° centigrade. When the temperature is -25° centigrade or colder, precipitation decreases from the same kind of clouds are observed. This suggests that, when natural nuclei are already sufficient, additional nuclei have a negative influence on the precipitation process. 
Figure VI-5 summarizes some of the 
above data. 
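
The stratification result can be captured as a simple rule of thumb. This is only a sketch of the thresholds described above, not an operational decision aid, and the threshold values are taken directly from the text:

```python
def winter_orographic_seedability(cloud_top_temp_c):
    """Classify a winter orographic cloud by cloud-top temperature,
    following the stratification described in the text: large
    increases near -15 to -20 C; decreases at -25 C and colder,
    where natural nuclei are already sufficient."""
    if -20.0 <= cloud_top_temp_c <= -15.0:
        return "increase expected from seeding"
    if cloud_top_temp_c <= -25.0:
        return "decrease expected from seeding"
    return "effect uncertain"
```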

Technological Improvements — Im- 
portant advances have been made 
in finding seeding materials other 
than silver iodide and dry ice. Many 
organic and inorganic materials have 
been studied in the field and in the 
laboratory. Several of the organics 
have been found superior to silver 
iodide in many respects, including 
cost, and work is progressing on 








[Figure VI-5: stratified results of the Climax I and Climax II experiments. Three sub-tables give the percent change in precipitation from seeding and the sample sizes (S = seeded, N = not seeded), stratified by 500-mb temperature, by the vertical gradient of potential condensate, and by 500-mb windspeed; most individual entries are not recoverable here.]

The table presents stratified data from three sets of experiments in an effort to show 
what factors are important in seeding in Colorado during the winter. The optimum 
conditions are summarized as follows: (1) the 500 mb temperature should be 
between -11° and -20°C; (2) the computed vertical gradient of potential con- 
densate in the 700-500 mb layer should be 1.3 to 2.0 g/kg/100 mb; and (3) the 500 
mb windspeed should be between 22 and 27 mps. The probability of each of these 
events has been computed, but is not presented here. 

making them suitable for operational use. 

Closely connected with new seed- 
ing materials are advances in delivery 
systems. Increased understanding of 
diffusion processes now puts posi- 
tioning of generators, either airborne 
or ground, on a more objective basis. 
New devices for producing nuclei 
permit more efficient use of nuclei 
material. Advances in radar tech- 
niques, coupled with improved under- 
standing of cloud characteristics and 

dispersion properties, permit safer 
and more effective use of aircraft in 
seeding operations. The use of rocket- 
launched, pyrotechnic seeding de- 
vices is receiving considerable attention. 

Modeling — Mathematical models 
play an increasingly important role in 
both research and operational precipi- 
tation-augmentation programs. They 
are used operationally in recognizing 
treatable situations, in choosing par- 
ticular clouds to seed, in specifying 
the position of mobile generators so 
that the effect will be felt in the target 
area, and in specifying the area of 
effect from fixed generators. These 
models, developed from the basic 
laws of physics, are usually relatively 
simple, and can be run on moderate- 
size computers in near real-time. 

More sophisticated models have 
been used only for research pro- 
grams, in part because present- 
generation computers are not capable 
of handling them in the time-scale 
needed for operational use. The value 
of these models lies in suggesting 
effects to look for in the field and 
in suggesting factors to be studied 
in more detail. Three types (scales) 
of models are currently available: 
(a) microphysics models, which con- 
sider the formation and growth of 
water droplets and ice crystals; (b) dy- 
namic models, which consider motions 
and processes within the cloud (see 
Figure VI-6); and (c) airflow models, 
which consider cloud-forming proc- 
esses. None of these models alone is 
adequate to describe the complexities 
of precipitation augmentation; several 
attempts are being made, therefore, 
at combining or chaining them. 

Implications for Society 

Precipitation augmentation is be- 
coming an active partner with the 
other components of the water-re- 
sources system. In many parts of the 
nation, it may prove to be the most 
economical and socially acceptable 
method to increase usable water supplies. 




10 p 1 1 1| 1 1 1 1 1 1 1 1 1 1 1 i i i| n 1 1 1 " 1 1 1 1 1 1 1 1 m i| m 1 1 1 1 1 1 1 m 1 1 1 1 1 1 1 1 1 1 1 1 " 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 



6 - 

4 r 
3 - 


/ — s 

ll 1 1 1 ll II I ll 1 1 1 1 1 1 1 1 ll 1 1 1 1 1 


I m ■ I ■ n i I i i i i 1 i i i i I i l i l I l i l l I l l l I t n ir 

12 3 4 


10 11 12 13 


15 16 17 18 19 20 

io um | nu |i nn m i n i mum ii mnn |i ii hh ii|i hh i hhh i hh mm i|i n i|i n i| m i pn i; m it 

9 - 


1 2 


1 1 1 1 1 1 1 i 1 1 1 1 

i i i i 1 1 i i 1 1 i i i i I i i t | [ i i i i 1 1 i 




19 20 

The two diagrams demonstrate a silver iodide seeding experiment done on computer- 
generated clouds. The numerical model simulates the growth of cumulus-type clouds 
forming over a mountain ridge in a domain 20 km wide and 10 km high. The general 
environmental airflow is from left to right. Clouds have formed to the left in the 
model and grown to form an anvil present at 7 km. The upper diagram shows the 
non-seeded case; the bottom, the seeded case. Seeding is simulated by changing all 
cloud liquid to cloud ice and the rain to precipitating ice at -10°C instead of 
-25°C in the natural (non-seeded) case. The hail (or graupel) shown is in concen- 
trations greater than 1 gm of hail per kg of air. Rain is in concentrations greater 
than 1 gm per kg. These results demonstrate the effects of overseeding — less rain 
and less hail come from the seeded cloud since the large amounts of cloud ice that 
form are carried aloft and downwind in the anvil. 



Furthermore, precipitation augmen- 
tation affects other natural resources 
besides water. For example, an in- 
crease in precipitation will have an 
effect on the natural plant and animal 
communities in and around the target 
area. Extra water on the soil may 
bring additional lands into grazing 
capability, but it may also hasten the 
leaching of nutrients. The availabil- 
ity of additional water may cause 
changes in man's use of the land. He 
may change the kinds of crops he 
grows. He may reap greater harvests 
from smaller acreage. None of these 
effects, however, is expected to be large. 

Potential Benefits — The interac- 
tions between man, his institutions, 
and precipitation augmentation are 
important. The direct benefit of addi- 
tional precipitation is that it helps to 
assure an adequate supply of water 
for municipal, industrial, and agricul- 
tural uses. Secondary benefits include 
the generation of low-cost electricity 
and assistance in abating air and 
water pollution. Relatively small op- 
erational projects for water supply 
and power generation have existed 
for years. What is needed is an in- 
tegrated program in which many 
benefits can be realized from one activity. 

Potential Liabilities — Precipitation 
augmentation does have associated 
liabilities. A few people object to any 
deliberate tampering with nature, 
some on moral or religious grounds, 
others simply on aesthetic grounds. 
Some of those who live or work in 
the target areas of augmentation op- 
erations could suffer financial loss, 
especially where the economic bene- 
fits are derived some distance away 
from the target area. Increased pre- 
cipitation in the form of snow could 
decrease the growing season and the 
tourist season. Erosion could increase 
slightly — although, alternatively, in- 
creased vegetation from the addi- 
tional moisture could cause erosion to 
decrease. Undesirable plant life may 
increase in certain areas. Increased 

snow could raise snow-removal costs 
(although an estimate made for the 
Colorado Rockies indicates no such 
effect for a 10 to 20 percent snow 
increase). Potential liabilities exist, at 
least in theory, in the possible extinc- 
tion of a few species of flora or fauna 
and in the modification of river chan- 
nels. The net value of precipitation 
augmentation must include determi- 
nations of the relative importance of 
man, nature, and their interaction. 

Legal Issues — Precipitation man- 
agement raises a variety of legal is- 
sues. Who owns the water in the 
atmosphere? How should losses re- 
lated to precipitation augmentation 
be compensated? How should opera- 
tions be regulated? How should the 
money to pay for operations be ac- 
quired (taxation by water district, 
state tax, federal funds, etc.)? Should 
research projects be treated differ- 
ently from operational projects with 
respect to liability? When water 
needs in one state can be helped by 
precipitation augmentation in an- 
other, who makes the decisions? 

Normative Issues — There are some 
reputable scientists who believe that, 
while seeding does affect certain 
cloud characteristics, there are too 
many conflicting results from cloud- 
seeding experiments to say that ob- 
served precipitation increases from 
seeded clouds were caused by the 
seeding. But the majority of scien- 
tists who question precipitation aug- 
mentation ask not "Does it work?" 
but "Should we use it?" In other 
words, precipitation augmentation, 
while far from perfected, is con- 
sidered by such scientists to be an 
operational reality. Precipitation aug- 
mentation today is thus in a position 
similar to that of nuclear power 
plants several years ago. Discussions 
center largely on the risks to people 
and the environment and on eco- 
nomic feasibility rather than on sci- 
entific capability. Answers to these 
questions await interdisciplinary stud- 
ies of real and hypothetical situations. 

Requirements for 
Scientific Activity 

The practical objective of current 
precipitation-augmentation research 
is the development of a precipitation- 
management system. The system in- 
cludes more than the ability to 
analyze water needs, recognize op- 
portunities, treat opportunities, and 
evaluate results. A fully developed 
system includes the ability to specify 
the results of treatment in advance 
with a high degree of confidence. It 
includes the ability to specify the 
areas that will be affected by the 
treatment, as well as the ability to 
assess beforehand the environmental 
consequences. Such systems need to 
be developed and thoroughly tested. 

To provide solid answers to the 
many unanswered questions of pre- 
cipitation augmentation, some im- 
proved instrumentation must be ac- 
quired. Some sort of standard nuclei 
counter is needed. Radar systems 
specifically designed for weather 
modification are needed to replace the 
surplus military equipment now be- 
ing used. A variety of airborne and 
surface remote-sensing devices would 
be useful. Especially needed are de- 
vices for determining the moisture 
distribution in the air from the sur- 
face to about 18,000 feet. Cloud- 
particle samplers are needed for 
cloud physics measurements. Several 
versions are available, but none pro- 
vides the scientist with all he needs 
to know. 

Accurate recognition of treatable 
situations is not yet a purely objective 
procedure. Better definition of the 
essential weather conditions is needed. 
Factors such as moisture flux are not 
easily measured on the scales needed 
for precipitation-augmentation proj- 
ects. Mathematical models and the 
computers to run them should be an 
integral part of the recognition sys- 
tem. Improved instrumentation will 
be needed to acquire the data for 
the system. 

The search for more effective treat- 
ment techniques must go forward. 




The diagram shows the concentration of ice nuclei observed in Seattle, Washington, 
from 1 July to 3 November 1968. The scale gives the numbers of ice nuclei per 
300 liters of air active at -21°C. The concentrations measured in the city were six 
times greater than the concentrations of nuclei measured at two unpolluted non- 
urban sites. From the plot of the concentrations on the wind rose, it is possible to 
deduce that there are sources of nuclei SW and SSW of the sampling site, which 
was in the northeastern part of the city. Analyses show that man-made sources of 
ice nuclei dominate over natural ones. Just what effect these nuclei have on the 
microstructure of clouds, and the development of precipitation, is not known, 
although studies in a growing number of cities seem to show that precipitation 
increases downwind of industrial areas. 

Less expensive and more readily avail- 
able materials are needed. Seeding 
materials that have beneficial side 
effects (such as fertilizing character- 
istics) or no side effects are desirable. 
More precise delivery techniques are 
needed so that the results of the treat- 
ment can be properly targeted and 
so that the optimum effect can be obtained. 

Better specification of the extra- 
area effects recently discovered is 

necessary for both targeting and eval- 
uation. The causes of the extra-area 
effects need to be understood so that 
the recognition and treatment systems 
can take the effects into considera- 
tion. Inadvertent modification of 
clouds by atmospheric pollutants is 
another vital but little understood 
issue. (See Figure VI-7) In some 
situations, inadvertent modification 
can be controlled. In others, it cannot 
be controlled but can be considered 
as a factor in the precipitation-augmentation system. Similarly important are the interactions between two or more neighboring augmentation projects. 

Advanced studies of both the posi- 
tive and negative interactions of pre- 
cipitation augmentation with other 
systems need to be carried out. Fac- 
tors in the natural environment will 
be affected by changes in precipi- 
tation. Short- and long-term con- 
sequences must be assessed from 
scientific, economic, and cultural view- 
points. The studies should not be 
limited to just the more obvious is- 
sues, such as ecological effects. The 
studies should consider the entire 
environmental system, which in- 
cludes man. 

Increasing interest in the environ- 
ment by both the scientific commu- 
nity and concerned citizens' groups 
argues for a more deliberate study of 
the environment as a system. Much 
literature has been circulated recently 
suggesting our impending doom if 
the quality of the environment con- 
tinues to deteriorate. Other studies 
have shown that severe water short- 
ages will be widespread by the year 
2000. While some of these state- 
ments may not be rigorously based 
on fact, they do suggest the impor- 
tance of early development of a tech- 
nology that can play a role in en- 
hancing both the quality and quantity 
of the water portion of the environment. 

How rapidly the fully devel- 
oped precipitation-augmentation sys- 
tem described above can be made 
available is in part a function of the 
level of effort. The first such system 
could be operational by 1975. This 
system will be effective for win- 
ter orographic storm situations in 
sparsely populated, high-elevation 
areas. Shortly thereafter, a similar 
system for convective clouds could 
be operational. Through evolutionary 
processes, systems for other cloud 
situations, and improved versions of 
the first, could be available by the 



3. FOG 

Modification of Warm and Cold Fog 

The principal impetus for the de- 
velopment of methods for modifying 
fog has come from civil and military 
aviation. Despite improvements in 
instrument-landing techniques, dense 
fog over an airport severely restricts 
or prevents aircraft landings and 
takeoffs. Such occasions, even if they 
last only a few hours, impose sub- 
stantial financial penalties on the air- 
lines, cause inconvenience and loss 
to the traveling public, and delay or 
abort military missions. Dense fog 
is also a serious hazard for marine 
and surface transportation. (See Fig- 
ure VI-8) On the other hand, fog is 
beneficial in certain forested regions 
in which the fog-drip from the trees 
supplies significant moisture. 

The time- and space-scales of fog 
and its frequency of occurrence are 
all small enough that no large-scale 
changes of climate appear likely even 

if all fogs were to be dissipated. 
However, the climate of certain local 
areas with a high incidence of fog 
would certainly be changed if the 
fog were eliminated. 

Cold Fog 

Modification of supercooled, or 
"cold," fogs by seeding them with 
ice nucleants has developed to the 
point of operational use at a number 
of airports where such fogs are rela- 
tively frequent. The scientific basis 
for modifying supercooled fogs is 
well established; the remaining prob- 
lems involve the engineering of reli- 
able and economical operational 
equipment and procedures. 

Nucleants — Cold fogs are seldom 
supercooled by more than a few de- 
grees centigrade and, therefore, the 


The photograph shows a section of an interstate highway running through the valleys 
of central Pennsylvania. The valley in the foreground is clear, with excellent driving 
conditions. Once the driver enters the gap between the valleys, however, visibility 
begins to decrease until it reaches near zero. Although a local phenomenon, this 
condition causes many accidents each year. 

ice nucleants must have the highest 
possible threshold-activation temper- 
ature. Dry-ice pellets and liquefied 
propane, carbon dioxide, and freon 
have typically been chosen to meet 
this condition. Silver iodide is not 
expected to be effective above -5° 
centigrade. Consideration should be 
given to the use of certain organic 
nucleants such as urea and phloroglucinol, which have been reported to have relatively high activation temperatures. 

Dispensing Methods — To be effec- 
tive, the nucleants must be distrib- 
uted fairly uniformly through the 
volume of fog to be modified. The 
earliest, and still the most effective, 
procedure is to distribute dry-ice pellets from aircraft flying above the fog; 
vertical distribution is assured by the 
rapid fall of the pellets through the 
fog. Nucleants in the form of fine 
particles or liquefied gases must be 
introduced directly into the fog, 
which may involve hazardous flight 
levels. The costs of aircraft seeding 
and the limited storage life of dry 
ice have led to the development of 
ground-based dispensers. Liquefied 
refrigerant gases are commonly used, 
often with fans or blowers to dis- 
tribute the resulting ice crystals 
through the fog. 

Fog is almost always accompanied 
by a wind drift, and the location 
and timing of the seeding operation 
must be selected so that the clearing 
moves over the airport at the desired 
time. This requires timely wind ob- 
servations, precise navigation for 
airborne seeding, or extensive arrays 
of fixed seeding dispensers. A wind 
shift during the operation may cause 
the clearing to miss the airport. 

Cost Considerations — Operational 
successes in the clearing of cold fog 


have been reported by the U.S. Air 
Force in West Germany and Alaska, 
by Orly Airport in Paris, and at 
several commercial airports in the north- 
western United States. Cold fog at 
most American airports is so infre- 
quent, however, that the standby cost 
of a cold-fog modification system 
probably cannot be justified. (It 
should be noted that the ice fogs 
that form in cold regions such as 
Alaska cannot be modified by seeding 
with ice nucleants.) 

Warm Fog 

Warm fog is much more common 
than cold fog. Many methods have 
been proposed over the years for 
modifying warm fog, but those that 
have shown significant success all in- 
volve the evaporation of the fog 
drops. The evaporation may be 
achieved by heating the air, by dis- 
tributing hygroscopic particles in the 
fog, or by forcibly mixing the fog 
with the drier and/or warmer air 
above the fog layer. 

Heating was employed at military 
airfields in England during World 
War II with considerable operational 
success. This so-called FIDO (Fog In- 
vestigation and Dispersal Operation) 
method was further developed at Arcata, California, after the war, and an 
operational system was installed at 
Los Angeles Airport. Moderate suc- 
cess was claimed, but the method 
was abandoned because of the large 
amounts of fuel required and the 
psychological and safety hazards of 
operating aircraft between two lines 
of flames. 

The fundamental unsolved prob- 
lem of thermal-fog modification is 
the uniform distribution of heating 
throughout the fog. In a typical fog, 
heating sufficient to raise the air 
temperature by about 1° centigrade 
will cause the fog to evaporate in a 
short time. Arrays of point heat 
sources, particularly linear arrays, can 
be expected to lead to convection, 

non-uniform heating, escape of heated 
air aloft, and horizontal convergence 
of fog near the surface. The U.S. Air 
Force has had some success using jet 
aircraft on either side of a runway 
as heat sources. Further engineering 
developments aimed at providing 
reasonably uniform heating by means 
of blower-heaters specifically de- 
signed for the task may be worth- 
while in view of the basic attractive- 
ness of thermal-fog modification. 
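
The 1° centigrade figure implies a simple heat budget per cubic metre of fog: warm the air, then evaporate the liquid water it holds. The sketch below assumes round-number physical constants and a liquid water content of 0.2 g/m³, a value chosen for illustration:

```python
def clearing_heat_j_per_m3(delta_t_c=1.0, lwc_g_per_m3=0.2):
    """Approximate heat (joules) needed to clear one cubic metre of
    warm fog: warm the air by delta_t_c and evaporate its droplets.
    Assumes air density 1.2 kg/m3, specific heat of air 1005 J/(kg K),
    and latent heat of vaporization 2.5e6 J/kg."""
    rho_air, cp_air, l_vap = 1.2, 1005.0, 2.5e6
    sensible = rho_air * cp_air * delta_t_c      # warm the air
    latent = (lwc_g_per_m3 / 1000.0) * l_vap     # evaporate the drops
    return sensible + latent
```

Under these assumptions roughly 1,700 joules per cubic metre are required, which over a runway-sized volume explains the large fuel demand that doomed FIDO.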

Hygroscopic particles introduced 
into fog grow by condensation, 
thereby reducing the relative hu- 
midity and leading to the evaporation 
of the fog drops. This transfer of 
the liquid water to a small number 
of larger solution droplets leads to 
an improvement in visibility in the 
fog. More complete clearing occurs 
as the solution droplets fall out under 
the action of gravity. Hygroscopic 
particles act something like ice crys- 
tals in a cold fog, with the important 
difference that the equilibrium vapor- 
pressure over the solution droplets 
rises rapidly as the droplet is diluted, 
approaching that of pure water. 

To minimize the total quantity of 
hygroscopic material required to 
modify a fog, the hygroscopic parti- 
cles should be as small as possible, 
consistent with the requirements that 
they be large compared to the fog 
drops and that they fall out of the 
fog in a reasonable time. Since the 
solution droplets become diluted as 
they fall, the deeper the fog the 
larger must be the initial size of the 
hygroscopic particles. When the 
depth of the fog is more than a few 
hundred meters, accretion of the fog 
drops by the solution becomes an im- 
portant mechanism in the lower por- 
tion of the fog. 
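
The sizing trade-off can be illustrated with Stokes' law for the settling speed of small droplets. This is a sketch under idealized assumptions (constant droplet radius, still air); real solution droplets grow and dilute as they fall:

```python
def stokes_fall_time_s(radius_um, fog_depth_m):
    """Time for a droplet of the given radius to settle through a fog
    layer at its Stokes terminal velocity,
        v = 2 r^2 g (rho_water - rho_air) / (9 mu),
    valid only for radii below roughly 40 microns."""
    r = radius_um * 1e-6                  # radius in metres
    g, rho_w, rho_a, mu = 9.81, 1000.0, 1.2, 1.8e-5
    v = 2.0 * r * r * g * (rho_w - rho_a) / (9.0 * mu)
    return fog_depth_m / v
```

Because the settling speed goes as the square of the radius, halving the droplet size quadruples the fall time, which is why deeper fogs demand larger initial hygroscopic particles.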

Mathematical models of the modi- 
fication of warm fog by hygroscopic 
particles have been devised and used 
to guide field experiments. The the- 
ory of the growth of hygroscopic 
particles and the evaporation of fog 
drops is well established. Reasonably 

adequate information is available on 
the drop-size spectra and liquid 
water content of natural fogs. Tur- 
bulent diffusion is arbitrarily intro- 
duced on the basis of a few estimates 
of the eddy-diffusion coefficient in 
fogs. However, these mathematical 
models are static in that they do not 
model the natural processes that form 
and dissipate fog. Dynamical models 
must be developed that incorporate 
these processes. Among other ad- 
vantages, such models should yield 
the characteristic time of the fog- 
formation process. It seems evident 
that any artificial modification must 
be accomplished in a time that is 
short compared to this characteristic 
time of fog formation. This is of 
the utmost importance in the design 
of fog-modification experiments. 

In field experiments, hygroscopic 
particles have been released from 
aircraft flying above the fog. The 
usual assumption that the trailing 
vortices uniformly distribute the par- 
ticles in the horizontal is highly 
questionable. Failure to achieve uni- 
form distribution of the seeding par- 
ticles is probably one of the principal 
causes of unsatisfactory modification 
experiments. A non-uniform distrib- 
ution can be countered only by in- 
creasing the total amount released to 
insure that there is an adequate con- 
centration everywhere. A closely re- 
lated problem is the marked tendency 
of the carefully sized hygroscopic 
particles to emerge in clumps. Imag- 
inative engineering design is needed 
to solve these problems, and nothing 
is more important at the present time. 

Air Mixing — Mechanical mixing of 
the warmer and/or drier air above 
a relatively thin fog layer will usually 
cause the fog to evaporate. The U.S. 
Air Force has produced cleared lanes 
by utilizing the strong downwash 
from helicopters; this technique is 
effective only in shallow fogs, how- 
ever. The cost/effectiveness ratio is 
probably large, but it may be justified 
for certain military purposes when 
the helicopters are available. 




In summary, the modification of 
cold fogs with ice nucleants is an 
operational success, and further en- 
gineering improvements are to be 
expected; but there are only a few 
regions where the frequency of cold 
fogs is sufficiently high to justify the 

expense of a permanent installation. 
Warm-fog modification by heat or 
by seeding with hygroscopic particles 
is achievable in the relatively near 
future. The requirements for success 
are more adequate numerical models 
of fog and, most importantly, imagi- 
native engineering design so that the 
assumptions made in the experi- 

mental design can be realized in 
practice. However, it remains true 
today, as thirty years ago, that the 
total cost of warm-fog modification 
will be high enough to discourage its 
extensive application. Some recent 
benefit/cost figures are shown in Fig- 
ure VI-9. 


(Figure VI-9: a table of air cancellations, delays avoided, and benefit/cost figures for Los Angeles,* Salt Lake City, and Des Moines; delays avoided ranged from 60 hours at Los Angeles to 203 hours at Des Moines. *Cold fog; all other stations are warm-fog cases.)
The table lists the operational benefits versus costs experienced by United Airlines 
during the winter of 1969-70. Benefits have been calculated as monies that would 
have been spent were fog dispersal not available or unsuccessful. Costs of delays
were computed from crew salaries, aircraft maintenance, and fuel and oil costs.
Diversion costs included alternate ground transportation, meal and hotel costs, 
and overtime charges for ground personnel. Not included were intangibles or 
incomputables such as maintenance dislocation, ferrying equipment, need for 
reserve aircraft, and mispositioning of flight crews when flights were diverted. Also 
not included is the cost of customer inconvenience when fog disrupts operations. 
It is of interest to note that benefits were twice the costs of the program at Los 
Angeles even though fog was successfully dispersed in only 32% of the cases. 
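The bookkeeping described in the caption can be sketched as follows; the dollar figures are hypothetical, and only the structure of the calculation (benefits as avoided delay and diversion costs, intangibles excluded) follows the text.

```python
# Sketch of the benefit/cost bookkeeping described in the caption above.
# All dollar amounts below are hypothetical illustrations.

def benefit_cost_ratio(avoided_delay_cost, avoided_diversion_cost, program_cost):
    """Benefits are monies that would have been spent had fog dispersal
    been unavailable or unsuccessful; intangibles are excluded."""
    return (avoided_delay_cost + avoided_diversion_cost) / program_cost

# With these illustrative figures, the ratio is 2.0, corresponding to the
# Los Angeles result quoted in the caption (benefits twice the costs).
print(benefit_cost_ratio(150_000, 50_000, 100_000))  # 2.0
```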

Fog Dispersal Techniques 

To assess the present state of fog- 
dispersal techniques and define the 
work to be done, it is necessary to 
consider three types of fog. 

Ice Fog 

This type of fog is an increasing 
problem for aviation and other forms 
of transportation in a few high-alti- 
tude localities. Comparatively little 
research has been done to develop 
economical methods of combating ice 
fog. The only technique available at 

present is the brute-force method of 
applying heat to evaporate it. Fur- 
ther research is required to assist in 
the development of more efficient 
means of thinning or dispersing this 
type of fog. 

Supercooled (Cold) Fog 

In the contiguous United States, 
approximately 5 percent of the dense 
fogs that close airports to opera- 
tions are of the cold type. In more 
northerly latitudes, the percentage is 

higher during the winter. Other 
forms of transportation are equally 
affected when visibility drops below 
one-half mile, but the economic im- 
pact is probably not as great as it is 
on aviation. 

Dry-Ice Dispersing Techniques — 
Dispersal of cold fog by seeding 
crushed dry ice from light aircraft is 
an operational reality at approxi- 
mately a dozen airports in the United 
States. Some of these programs have 
been established each winter since 
1962. The physical changes that 


occur are well understood, stemming 
from the research of Schaefer, Lang- 
muir, and Vonnegut in 1946. Al- 
though the dry-ice technique is 
theoretically effective in converting 
supercooled water to ice crystals only 
at temperatures colder than −4° centigrade, operational experience has
demonstrated unequivocally that this technique is effective up to 0° centigrade
through proper sizing of the dry-ice pellets and proper control of
the seeding rates for the conditions.

This method of dispersing cold fog 
is about 80 percent effective. The 
failures that do occur are primarily 
related to operational problems such 
as miscalculating wind drift, which 
results in the cleared area moving off 
target. Occasionally, too, the tech- 
nique is stretched beyond the capa- 
bility of the physical reactions to take 
place, typically in supercooled fog 
decks whose upper layers are several 
degrees warmer than 0° centigrade.
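The wind-drift allowance implied by the text, seeding far enough upwind that the cleared area drifts onto the target as it develops, can be sketched as simple arithmetic; the wind speed and development time below are hypothetical.

```python
# Illustrative sketch of the wind-drift allowance: release the dry ice
# upwind of the target by the distance the fog will drift while the
# cleared area develops. Parameters are hypothetical.

def upwind_offset_m(wind_speed_m_s, clearing_time_min):
    """Distance upwind of the target area at which to begin seeding."""
    return wind_speed_m_s * clearing_time_min * 60.0

# A 3 m/s drift and a 20-minute development time call for seeding
# 3,600 meters (about 2 miles) upwind; miscalculating either figure
# moves the cleared area off target.
print(upwind_offset_m(3.0, 20))  # 3600.0
```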

Ground Dispensing Methods — Be- 
cause of such operational problems 
and the complex logistics that are 
required in dispersing an airport fog 
by means of aircraft, a ground dis- 
pensing system, which employs essen- 
tially the same physical principles, is 
more desirable. Liquid propane has 
been used effectively as the seeding 
agent; it has reached a degree of 
sophistication in France, where con- 
trol of supercooled fogs at Orly Air- 
port is completely automated through 
the use of seventy fixed dispenser 
heads deployed around the target 
area. Liquid propane has been used 
operationally to combat cold fogs in 
the United States, but, primarily for 
economic reasons, the technology has 
never been developed beyond the use 
of a few portable dispensing units. 

Researchers have suggested that 
liquid propane and other cryogenics, 
in addition to providing the cooling 
mechanism, also alter the fog drop- 
lets through a clathration process. 
Since this latter process may increase 
the effectiveness of liquid propane 

in fog temperatures several degrees 
warmer than 0° centigrade, further 
investigation is warranted. Many air- 
ports are subjected to dense winter 
fogs with characteristic temperatures 
slightly warmer than freezing. De- 
velopment of this clathration process 
would pay off in benefits at many 
airports that cannot support the 
more expensive warm-fog dispersal systems.

Warm Fog 

Since all but about 5 percent of 
the dense fog that closes airports and 
cripples other forms of transporta- 
tion in the populated latitudes is of 
the warm type, it would be expected 
that there has been some preoccupa- 
tion with measures to alleviate the 
warm-fog problem. Formal research 
into fog physics and development of 
laboratory techniques for dispersing 
fog have, however, been under way 
less than forty years. Out of desper- 
ation, some brute-force methods for 
evaporating fog have been undertaken where economics was not a consideration.

Houghton's work at the Massachu- 
setts Institute of Technology in the 
1930's was the first formal research 
aimed at fog dispersal. A number 
of other studies on warm fog were 
subsequently undertaken by federal 
military and civilian agencies, but 
until the 1960's none of the fog- 
modification concepts was applied to 
routine commercial or military activi- 
ties. Economics, problems of logistics, 
or deleterious effects on the environ- 
ment were the deterrents. 

Modern Techniques — At least one 
installation of a refined thermal sys- 
tem for evaporating fog at a busy 
airport is planned for 1972. Other 
thermal methods that utilize energy 
more efficiently are under develop- 
ment. All of these systems are expen- 
sive and will probably be limited to 
application at major airports or other 
sites where the economic pressure of 
fog paralysis is high. 

For two years, warm fog has been 
regularly dispersed at a few U.S. air- 
ports through chemical seeding tech- 
niques that had been partially con- 
firmed by fog physics research and 
laboratory testing. This approach is 
feasible, and is producing economic 
benefits exceeding costs of the pro- 
grams by a factor of about 3 to 1; but 
it is considered in the developmental 
stage because aircraft dispensing is 
required. For full reliability and opti- 
mum benefit/cost ratios, a ground 
dispensing system must be developed 
that will use the most effective mate- 
rials. A number of promising con- 
cepts have been conceived and some 
have been laboratory tested. Further 
development work is required, but 
success will depend on better basic 
knowledge of fog makeup than we 
have today. 

Basic Warm-Fog Physics — Suffi- 
cient knowledge of fog physics exists 
to disperse warm fog with heat. The 
more attractive and economically 
feasible approaches to warm-fog dis- 
persal, which do not employ heat, 
require more basic physical knowl- 
edge in order to develop the most 
efficient system. 

Recent research involving the use 
of hygroscopic materials as seeding 
agents has provided some much- 
needed knowledge about fog, but 
there are still some baffling blind 
spots. This new knowledge came fif- 
teen years after successful feasibility 
tests were conducted, using the same 
principle, but which were not con- 
tinued because of logistic problems. 
It is hoped that another long delay 
will not develop before we can ex- 
plain, for example, why polymers, 
surfactants, and other substances, 
when diffused properly, produce posi- 
tive results, apparently through a 
strong ionization process. Supersatu- 
rated solutions of nontoxic materials 
with endothermic properties, and 
the electrogas-dynamic principle, are 
promising dispersal materials and 
techniques which require further de- 
velopment, as does research on the 
physics of fog. 




Monsoon Variations and Climate and Weather Forecasting 

The monsoon area extends from 
western Africa to northeastern Aus- 
tralia, being bounded to the north by 
the great mountain ranges of southern 
Asia. In the southern hemisphere it 
encompasses southeastern Africa and 
northern Australia but does not ex- 
tend beyond the equator over the 
central Indian Ocean. (See Figure 
VI-10) Its peculiarity, distinguishing 
the area from all others, is the marked 
difference in prevailing surface wind 
directions between winter and sum- 
mer. Winds blow predominantly from 
continent to sea in winter and from 
sea to continent in summer. 

Thus, in general, since moist air 
covers the continents in summer and 
dry air in winter, the summers are 
usually wet and the winters dry. 
Over the northern hemisphere this 
pattern is significantly distorted by 

the huge, elevated mass of Himalaya- 
Tibet which, through its thermo- 
mechanical effect on the atmospheric 
circulation, supports the vast perma- 
nent deserts east of 70° E. longitude, 
insures that India, Burma, and Thai- 
land experience arid winters and very 
wet summers, and keeps China rela- 
tively cloudy and moist throughout 
the year. 

Except for destructive winds asso- 
ciated with relatively rare tropical 
cyclones in the China Seas and Bay 
of Bengal, the attention of meteor- 
ologists in the monsoon area is 
focused on only one phenomenon — 
rain. Accurate long-range forecasts 
for agricultural planning, or short- 
range forecasts for irrigation or of 
floods would be invaluable to the 
economy of every country in the area. 
But rainfall variability on every time- 


(Figure VI-10: world map outlining the monsoonal regions.)
The map delineates the regions of the world that are monsoonal — i.e., where the 
prevailing wind direction shifts at least 120 degrees between January and July. By 
sharpening the definition according to principles developed by Ramage, it is possible 
to define the true monsoon area as that included in the rectangle shown covering 
large parts of Asia and Africa. 
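The caption's criterion, a prevailing wind direction shifting at least 120 degrees between January and July, can be applied directly; the station directions in the example are hypothetical.

```python
# Classify a station as monsoonal using the criterion in the caption:
# prevailing wind direction shifts at least 120 degrees between
# January and July. Compass angles wrap at 360 degrees.

def angular_difference(deg_a, deg_b):
    """Smallest angle between two compass directions, in degrees."""
    d = abs(deg_a - deg_b) % 360.0
    return min(d, 360.0 - d)

def is_monsoonal(january_dir_deg, july_dir_deg, threshold=120.0):
    return angular_difference(january_dir_deg, july_dir_deg) >= threshold

# A northeast winter wind (45 deg) reversing to southwest in summer (225 deg):
print(is_monsoonal(45, 225))   # True
# A hypothetical mid-latitude station with only a modest seasonal shift:
print(is_monsoonal(270, 300))  # False
```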

and space-scale — from inter-annual 
to diurnal and from intercontinental 
to mountain/valley — render clima- 
tology of limited use in providing the 
necessary planning information. 

Status of Tropical Meteorology 

Long-Range Forecasting — Most 
existing work was done in India and 
Indonesia before World War II. Mul- 
tiple-regression equations based on 
lag correlations were first used in 
the 1920's to forecast seasonal rainfall. Unfortunately, performance was
disappointing — droughts and floods 
were never anticipated and predictor/ 
predictand correlations proved to be 
most unstable. Apart from a modest 
continuing search in India for new 
correlations, little effort is now being expended.

Unless the deterministic forecast 
methods to be tested in the Glo- 
bal Atmospheric Research Program 
(GARP) perform much better than 
even their most optimistic proponent 
expects, there is little chance of use- 
ful developments in forecasting sea- 
sonal rainfall extremes. 

Short-Range Forecasting — For the 
past fifty years the practice of tropical 
meteorology has been distorted (usu- 
ally unfavorably) by uncritical graft- 
ing of hypotheses and techniques 
developed in middle latitudes. As one 
scientist has observed: 

We have again and again ob- 
served very reputable and highly 
specialized meteorologists from 
higher latitudes who were deter- 
mined to solve the problems of 
tropical meteorology in a very 
short time by application of mod- 
ern scientific methods and use of 
new scientific resources such as 


computerization. Then, after a few 
years, they find out that the thing 
doesn't quite work this way and 
the tropics cannot be approached 
by the methods used to solve prob- 
lems in higher latitudes. 

Training of Tropical Meteorolo- 
gists — Almost every professional me-
teorologist in Burma and Thailand 
holds an advanced degree in meteor- 
ology from a foreign university, and 
yet their contributions to knowledge 
of even their own country's meteorology have been minuscule. In part,
this is because many monsoon-area 
meteorologists have received inten- 
sive training in other countries, espe- 
cially in the United States and the 
United Kingdom, but almost never by 
teachers with any experience in, or 
appreciation of, monsoon meteorology. In this country, even the
tropical meteorologists who instructed them would confidently, and
quite unjustifiably, extrapolate their tropical oceanic experience to
the continents.

Numerical forecasting is the latest 
invader from the higher latitudes. 
Since some of the training received in 
other countries is at last beginning to 
seem relevant, everyone with access 
to a computer is trying out the 
models. Despite the fact that none 
of the models has demonstrated any 
weather forecasting skill over the 
Caribbean and around Hawaii, and 
despite the fact that problems of 
grid-mesh size are even more critical 
over the continents than over the 
oceans, resources which can ill be 
spared are being squandered on the 
latest fad — on the unsupported and 
unjustified assumption that numeri- 
cal forecast techniques have already 
significantly improved on subjective 
analysis and forecasting in the tropics. 
The machine churns out reams of 
charts — while professional meteor- 
ologist positions remain unfilled. 

In the monsoon area, the best aid 
to local forecasting is the cloud pic- 
ture from an Automatic Picture 
Transmission (APT) satellite. But the 
only way to use this information 

intelligently is through hard, subjec- 
tive evaluation, and this is so un- 
fashionable that a computer is often 
considered more desirable than an 
APT read-out station. A monsoon- 
area meteorologist, after intelligently 
and deliberately studying a detailed 
climatology and a sequence of care- 
fully analyzed synoptic and auxiliary 
charts (including APT pictures), can 
forecast consistently better than 
chance and significantly better than 
a numerical model. A statistical pre- 
diction should always be available to 
him. He should modify that predic- 
tion only when he discovers a sig- 
nificant change trend in the charts. 
When in doubt, stay with statistics. 
This may seem obvious, but such 
down-to-earth advice is rarely given 
during academic instruction. 

Training Facilities in the Tropics — 
If training in middle-latitude institu- 
tions is so inadequate, what about 
indigenous programs? 

In Asia, the Royal Observatory, 
Hong Kong, is a good but small cen- 
ter of research, emphasizing urban 
pollution and hydrological planning. 
Useful, practical, and theoretical 
studies are being pursued in the 
People's Republic of China. The In- 
stitute of Tropical Meteorology in 
Poona, India, is conducting good cli- 
matological studies but is also un- 
critically applying numerical forecast 
models developed in Washington, 
D.C., and Honolulu. The program in 
the University of the Philippines, 
launched with some fanfare three 
years ago, has apparently made no 
progress — an expensive faculty waits 
for enrollments but is ignored by 
meteorological services in the region. 
The Department of Geography in the 
National University, Taipei (Taiwan), 
has done good work, particularly on 
the effects of typhoons, while the 
Department of Oceanography in the 
University of Malaya (Kuala Lumpur) 
has made a promising beginning with 
useful climatological and synoptic studies.

Apart from a small department of 
meteorology in the University of 

Nairobi, in Kenya (which has turned 
out at least one promising scientist), 
and, possibly, some activity at the 
University of Ibadan, in Nigeria, 
nothing much seems to be happening 
in Africa. Australia largely neglects 
monsoon meteorology except for a 
small in-house effort in the Regional 
Meteorological Center, Darwin. 

Over-all, the U.S. military interest 
in southeast Asia has contributed 
more to meteorological research and 
to improvement in meteorological 
training in the monsoon area over 
the past five years than any other 
factor. Research conferences spon- 
sored by defense agencies have pro- 
duced significantly more than just 
military benefits. One spin-off was 
the World Meteorological Organiza- 
tion training seminar conducted in 
December 1970, in Singapore. 

Summing up, short-range monsoon 
weather forecasting can be improved, 
but there is little chance of improve- 
ment stemming from the BOMEX ex- 
periment in the Atlantic (see Figure 
VI-11) or from continued training of 
monsoon-area meteorologists in insti- 
tutions with little understanding of, 
or interest in, the peculiar problems 
of monsoon weather. More can prob- 
ably be done by supporting the efforts 
in Taipei, Hong Kong, Kuala Lumpur, 
and Nairobi, particularly in the direc- 
tion of temporarily assigning out- 
side experts (perhaps on sabbatical 
leaves) to these places. The experts 
might even learn something from the monsoon-area meteorologists.

Scientific Communication — One 
other serious problem is that research 
into monsoon meteorology is pro- 
vincial. Investigators have seldom 
been aware that in other monsoon 
regions similar problems have been 
under study or even solved. Insuffi- 
cient scientific communication partly 
accounts for this. The only widely 
distributed journals are published in 
middle latitudes. Regional journals 
or research reports are often well dis- 
tributed beyond the monsoon area 
but poorly distributed within it. 




(Figure VI-11: diagram of the BOMEX instrument array, showing the ships Gill and Rockaway, a land-based station, current stations, and thermistor array moorings; a note indicates that the array would be oriented at right angles to the mean trade wind.)

The deployment of instrument platforms for BOMEX is shown in the diagram. This 
figure represents the consequence of designing a group of experiments of sufficient 
scope and precision to test hypotheses and obtain useful new data from an 
intermediate-scale system. The event is unique in human history. Participants
in the experiment included the Departments of Commerce, Defense, Interior,
State, and Transportation, the National Aeronautics and Space Administration,
the Atomic Energy Commission, the National Science Foundation, the National
Center for Atmospheric Research, and more than 10 universities.

Basic Concepts 

The general character of the mon- 
soons and their inter-regional varia- 
tions reflect the juxtaposition of con- 
tinents and oceans and the presence 
or absence of upwelling. However,
without the great mechanical and 
thermal distortions produced by the 
Himalayas and the Tibetan Plateau, 
the vast northern-hemisphere deserts 
would be less desert-like, central 
China would be much drier and no 
colder in winter than India, while 

even over the Coral Sea winter cloud 
and rain would be uncommon. 

Within the monsoon area, annual 
variations are seldom spatially or 
temporally in phase. Even if these 
variations were understood and their 
phases successfully forecast, accurate 
day-to-day weather prediction would 
not necessarily be achieved, for the 
climatological cycles merely deter- 
mine necessary conditions for certain 
weather regimes; synoptic changes 
then control where and when the 

rain will fall, and how heavily, and 
whether winds will be destructive. 

Synoptic-Scale Changes — Al- 
though not new, a most important 
concept is that of wide-ranging, 
nearly simultaneous accelerations or 
decelerations within a major vertical 
circulation. Causes are elusive, al- 
though the changes generally appear 
to be triggered by prior changes in 
the heat-sink regions of the vertical 
circulation. This is a field of truly 
enormous potential for numerical 
modeling, on a time-scale between 
synoptic and seasonal, in which fluc- 
tuations in radiation and in air- 
surface energy exchange might pro- 
duce profound effects. 

The concept both explains previous 
difficulty in maintaining continuity 
of synoptic analysis and demands 
that notions of day-to-day weather 
changes be examined and probably 
modified. Even during winter, fronts 
seldom remain material boundaries 
for long and air-mass analysis con- 
fuses more often than not. 

That synoptic-scale disturbances 
often appear to develop and to 
weaken in response to changes in the 
major vertical circulations might ex- 
plain why many of the disturb- 
ances are quasi-stationary. In turn, 
synoptic-scale vertical motion deter- 
mines the character of convection and 
the efficiency with which energy is 
transported upward from the heat source.

Synoptic-scale lifting, by spreading 
moisture deeply through the tropo- 
sphere, reduces the lapse rate and 
increases the heat content in mid- 
troposphere. Thus, though it dimin- 
ishes the intensity of small-scale 
convection and the frequency of thun- 
derstorms, it increases rainfall and 
upward heat transport. Conversely, 
synoptic-scale sinking, by drying the 
mid-troposphere, creates a heat mini- 
mum there, hinders upward transport 
of heat, and diminishes rainfall. How- 
ever, the increased lapse rate favors 
scattered, intense small-scale convec- 
tion and thunderstorms. 


In the monsoon area, the character 
of the weather, on the scale of indi- 
vidual clouds, seems to be determined 
by changes occurring successively on 
the macro- and synoptic scales. Rains 
set in — not when cumulonimbus 
gradually merge but when a synoptic 
disturbance develops, perhaps in re- 
sponse to change in a major vertical 
circulation. Showers, too, are part of 
the synoptic cycle. Individually in- 
tense, but collectively less wet, they 
succeed or precede rains as general 
upward motion diminishes. 

When synoptic-scale lifting is com- 
bined with very efficient upper-tropo- 
spheric heat disposal, the lapse rate 
may be steep enough to support in- 
tense convection. Then, a vast, "con- 
tinuous" thunderstorm gives pro- 
longed torrential rain. Many times 
this takes place within the common 
upward branch of two major vertical circulations.

Needed Scientific Activity 

Many tropical meteorologists have 

striven to make their work appear as 
quantitative and objective as possible. 
This commendable aim has led to im- 
portant climatological insights. How- 
ever, in synoptic studies their quanti- 
tative results have usually been belied 
by nature's quantities. A numerical 
model which determines that air is 
massively rising over the deserts of 
Arabia has limited validity no matter 
how quantitative and objective it 
might be. Energy-budget computa- 
tions in which precipitation and 
evaporation are residuals, or must be 
estimated, have also had their day. 

Forecasting and research should be 
inseparable. The very few monsoon- 
area weather services that enable 
their forecast meteorologists to spend 
at least one-third of their time on 
research have thereby greatly en- 
hanced staff morale and their scien- 
tific reputations, to say nothing of im- 
proved forecast accuracy. Combined 
forecast-research programs could well 
be successfully directed to solving 
problems and to increasing the num- 
ber of recognizable models of synop- 
tic circulations. 

The area covered by synoptic analy- 
ses should be sufficiently broad for 
the major vertical circulations to be 
monitored. Then interaction with 
synoptic disturbances and consequent 
effects on rainfall could be detected 
and possibly anticipated. 

Mesoscale gradients within synop- 
tic systems and their diurnal varia- 
tions might be better understood 
were studies to combine information 
from weather radars and weather 
satellites. Ceraunograms could help 
bridge the gap between meso- and 
synoptic scales. Aerial probing of con- 
tinuous thunderstorms would likely 
illuminate the shadowy picture we 
now have of energy transformations. 

We should view the future of mon- 
soon meteorology with optimistic dis- 
content. Regional progress in under- 
standing and forecasting weather has 
been disappointingly slow. However, 
attacks are being vigorously pressed 
on problems of concern to the entire 
monsoon area. What is needed is the intra-area exchange of people and ideas.

Tropical Meteorology, with Special Reference to Equatorial Dry Zones 

The outlook for meteorological ob- 
servations in the tropics, as now 
programmed, is excellent for many 
purposes and far superior to the past. 
Much can be done with existing and 
prospective observations in the way 
of field experimentation and synoptic- 
statistical modeling. Ambitious proj- 
ects like special or worldwide net- 
works or expeditions, however, should 
be undertaken only if the necessary 
data base is really assured. Further- 
more, meteorology, as a discipline, is 
still far too self-contained; special 
efforts are needed to promote inter- 
disciplinary research. 

Four problems are particularly in 
need of concentrated research in tropi- 
cal meteorology during the 1970's: 

Water Supply — This age-old prob- 
lem is becoming aggravated by popu- 
lation increases in tropical countries, 
as elsewhere. The need is to find 
ways to assure an adequate water 
supply over the middle and long 
term — i.e., on a seasonal or annual 
basis. Several avenues of scientific 
development could be promising: 
First, it has become more than ever 
urgent to improve weather-prediction 
methods. Second, experiments for in- 
creasing precipitation artificially need 
to be broadened to see whether 
(a) such increases are possible at all 
on tropical land areas; and (b) enough 
water can be produced by man to 
make a significant difference. While 
not directly a part of meteorology, 
desalination of sea water and diver- 

sion of large rivers (e.g., part of the 
Amazonas in northeast Brazil) also 
offer possibilities for enhancing tropi- 
cal water supplies. 

Tropical Storms — Again, the prob- 
lem has both predictive and modifica- 
tion aspects. Prediction beyond 12 to 
24 hours remains a large problem in 
the areas affected by tropical storms 
and hurricanes. The German Atlantic 
Expedition of 1969 has again raised 
the question of whether tropical 
storms can be "modified" and, in- 
deed, whether or not it would be wise 
to do so. It should not be overlooked 
that tropical storms in many situa- 
tions and many areas bring great eco- 
nomic benefits, even though news re- 



leases usually cite only the damage 
they cause. 

Role of the Tropics in the General 
Circulation — The role of the tropics 
in hemispheric or global circulation 
is known to be important. The long- 
term (five days or more) prediction 
models now being developed require 
a tropical data input that is not yet
adequate. In particular, studies are 
needed of (a) how constant the 
tropics are as a source of energy and 
momentum, and (b) the appropriate 
way to include in the models the 
energy infusions into the atmosphere 
from more or less point sources —
i.e., from small-area cumulonimbus 
cloud systems. In addition, ways 
must be found to represent surface 
interface processes, not only evapo- 
ration and sensible heat transport, 
but also momentum exchange, espe- 
cially on the equator itself. 

Hemispheric Interchange — Any 
interconnection between the extra- 
tropical regions of both hemispheres 
must take place via the tropics. From 
the spread of various tracers or aero- 
sols, over a scale of weeks or months, 
we know that an exchange of air does 
take place. An understanding of 
these exchanges is particularly rele- 
vant to problems of air pollution. 
Actual pollution problems in the 
tropics are not likely to become 
severe because of the unstable strati- 
fication, although tropical countries 
are, of course, exposed to sedimenta- 
tion or washout of pollutants by pre- 
cipitation. In the longer view, how- 
ever, the ability of the air to transport 
pollutants across the equator requires 
serious study of the air exchange — 
its mobility, the magnitude of the 
exchange, preferred paths, and the 
like, with a view to eventual control 
of such transports. 

Needed scientific activity under 
each of these major categories is dis- 
cussed in the sections that follow. 

Tropical Water Supply 

Some parts of the tropics are sub- 
ject to recurrent, severe droughts. 

Those of northeast Brazil, which have 
led to large out-migrations of popu- 
lation and great economic and politi- 
cal instability in all of Brazil, are an 
example. These droughts, superim- 
posed on an average rainfall that is 
itself marginal for a tropical economy, 
have lasted from two to as long as 
nine consecutive years. The longer 
droughts have occurred in the more 
recent part of the climatological rec- 
ords, suggesting a secular drying out 
— a most unfavorable circumstance. 

Many scientific questions remain 
before such tropical droughts can be 
understood, much less controlled. 
The droughts (as well as the inter- 
mittent heavy rainfall years) must be 
related somehow to the anomalies of 
the northern- and southern-hemi- 
sphere general circulations and pos- 
sibly to some oceanic temperature 
distributions that do not follow di- 
rectly from anomalous surface trade- 
wind speeds and directions, as well 
as to associated surface divergences 
or convergences. But the controls 
that govern these relationships are 
completely unknown at present. Lack 
of data over the tropics, the southern 
oceans, and even the North Atlantic 
at lower latitudes has prevented any 
definitive study. Little adequate use 
has been made of such information as 
is available — surface temperature, 
pressure, and precipitation anomalies 
over wide areas, as well as recent 
findings on wind anomalies over the 
equatorial Atlantic. 

Data Base — New data are accumu- 
lating very fast for all parts of the 
tropics, eliminating the old excuse 
that lack of observations prevents 
progress. Data have been accumu- 
lating from the rapidly growing num- 
ber of commercial flights over tropi- 
cal areas. Programmed new satellite 
data are adding even more rapidly to 
the pile. An energetic attack on the 
discovery of the controls of equatorial 
dry zones and variable rainy seasons 
should be possible in the 1970's as a 
result of these new data. Once the 
controls are known, it will be possible 
to see whether prediction of the con- 

trol functions can be achieved with 
synoptic-statistical modeling tech- 
niques, although direct deterministic 
prediction does not appear in the 
picture for the foreseeable future. 

Cloud Modification — The ques- 
tion of cloud-modification potential 
in the tropics remains unresolved. 
Nonprecipitating cumulus congestus 
may be a preferred cloud form over 
many semi-arid tropical areas. But 
past efforts to study the possibilities 
of modifying such clouds have been 
rather sporadic. Early interest in Aus- 
tralia has lagged. A few serious cu- 
mulonimbus studies have been made 
in the Caribbean, but these relate 
to the atmosphere over open sea; 
since surface heat sources are much 
stronger over land, these oceanic ex- 
periments cannot be applied directly 
to the tropics, although they may be 
useful indirectly if they are successful 
in making cumulonimbus grow. 

Quite apart from modification ex- 
periments, it would be of value 
merely to learn the cloud composition 
at different locations in order to 
assess what might be termed the 
"stimulation potential." Even in this 
respect, knowledge has remained de- 
ficient. There exists on this subject a 
great need not only for scientists but 
also for adequate instrumentation 
(notably radar) and good technicians. 
Good radar technicians actually avail- 
able for meteorology are rare, and in 
tropical countries they tend to be 
either nonexistent or insufficiently 
skilled. The World Meteorological 
Organization has a large technician-training
program, which merits support.

Tropical Storms 

Tropical storms are notoriously 
variable in frequency from year to 
year and region to region. (See Figure 
VI-12) Sometimes the connection with 
the general circulation is obvious, but 
not always. The role of hurricanes in 
the general circulation is not yet fully 
determined, and general-circulation 
research, with a focus on general cir- 



North Atlantic Ocean .......................... 73
North Pacific, off west coast of Mexico ....... 57
North Pacific Ocean, west of 170°E ........... 211
North Indian Ocean, Bay of Bengal ............. 60
North Indian Ocean, Arabian Sea ............... 15
South Indian Ocean, west of 90°E .............. 61
South Indian Ocean, northwestern Australia ..... 9

The table shows the frequency of tropical storms per 10 years. The numbers are
only estimates of the number of tropical cyclones to be expected, since, until
recently, there have been no reliable statistics except for the Atlantic, where ship
traffic has been heavy and island stations numerous for many years. Surveillance
by satellites will provide worldwide coverage of tropical cyclones.

culations favorable or unfavorable to
tropical storms, is definitely needed. 
Clearly, such storms are not mere 
nuisances. A single hurricane can re- 
place the function of the equatorial 
trough zone in the Atlantic for verti- 
cal transport of heat and moisture 
and their transmission to higher latitudes.
Altogether, the true value of such
storms — when, where, and under what
circumstances — needs to be
stressed and measured. Coastal dam- 
age and associated flooding from hur- 
ricanes in areas such as southeastern 
United States usually receive the 
widest publicity. It is forgotten that, 
as these storms move slowly inland 
and turn into unspectacular inland 
rains, they have on occasion saved 
the cotton crop and even relieved 
water shortages of cities such as At- 
lanta. Lowered water tables over 
southern Florida and other areas, with 
their danger of salt-water intrusion 
into the water supplies of cities like 
Miami, can also be counteracted by 
hurricane precipitation. In terms of 
dollars, then, hurricanes can often 
bring benefits that are comparable to 
the damage they cause. 

Impact of the Tropics 
on World Weather 

Long-Period Trends — As the en- 
ergy and momentum source for the 

general circulation, the tropics are 
most likely to have an important im- 
pact over long time-scales (from 
months to years). The excess of 
energy acquired and held by the 
tropical oceans may undergo slow 
variations of possibly great impor- 
tance for long-period circulation 
anomalies. Bjerknes, for example, 
has speculated on the equatorial Pa- 
cific and its influence over large areas 
beyond the tropics. 

Expanded observational networks 
at sea and, again, satellite data now 
appear sufficient for empirical re- 
searches to begin on such aspects 
of general circulation. Theoretical 
modeling would also be useful to in- 
dicate how much variation in the 
tropics is needed to produce an even- 
tual circulation upheaval elsewhere. 
From models that have been run so 
far, it appears that the heat accumulations
or deficits need not be very large.

The intensity of the mean merid- 
ional circulation is also a matter for 
serious study. Data are marginally 
sufficient to calculate this circulation 
on a monthly, if not weekly, basis. 
Variations in the cell have hardly 
been considered at all; yet they would 
profoundly affect, among other 
things, the energy and momentum bal- 
ance picture, subtropical jet streams, 
stress in higher latitudes on the 

ground, and relations to the intensity 
of the Siberian winter high. 

Short-Period Fluctuations — Vari- 
able exchanges with the tropics may 
be responsible for the "index cycle" 
of the general circulation in the west- 
erlies on a two- or three-week scale. 
Prediction experiments now planned 
in connection with the Global At- 
mospheric Research Program (GARP) 
may or may not lead to an under- 
standing of such influences. Sepa- 
rate studies — using diagnostic data 
from the National Maritime Commis- 
sion and other hemisphere analyses 
and data storages — would also be of 
considerable value. Such studies 
could also investigate whether the 
exchanges are forced from higher 
latitudes, and in this way learn more 
about the mechanisms for the vari- 
ability of the atmospheric machine. 

For prediction equations, much em- 
phasis has been given to parameter- 
ization of cumulonimbus convection, 
since a few thousand cumulonimbus 
cover roughly 0.1 percent of the 
tropics at any one time. Much re- 
search on this subject is under way, 
although some dispute remains as to 
the form the research should take. 
GARP takes the view that a master 
tropical experiment must be con- 
ducted for final clarification. While 
a series of smaller projects might be 
inadequate for the problems to be 
solved, the master experiment may 
not succeed either, since experimental 
difficulty increases nonlinearly with 
the size of an experiment. Further- 
more, there is a deplorable tendency 
to ignore the results of past expedi- 
tions in writing the prospectus for 
new ones; in present planning, for 
example, such large undertakings as 
the German Atlantic Expedition and
its results have been generally overlooked.
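The 0.1 percent figure can be checked with a rough calculation. The sketch below, in Python, compares the area covered by a few thousand circular cumulonimbus with the surface area of the tropical belt; the cloud count (3,000) and cloud radius (4.5 kilometers) are illustrative assumptions, not values from the report.

```python
import math

EARTH_RADIUS_KM = 6371.0

def band_area_km2(lat_deg):
    # Surface area of the belt between latitudes +lat_deg and -lat_deg:
    # 2*pi*R^2 * (sin(lat) - sin(-lat)) = 4*pi*R^2 * sin(lat)
    return 4.0 * math.pi * EARTH_RADIUS_KM ** 2 * math.sin(math.radians(lat_deg))

def cb_area_fraction(n_clouds, cloud_radius_km, band_lat_deg=23.5):
    # Fraction of the tropical belt covered by n circular cumulonimbus.
    covered = n_clouds * math.pi * cloud_radius_km ** 2
    return covered / band_area_km2(band_lat_deg)

frac = cb_area_fraction(n_clouds=3000, cloud_radius_km=4.5)
print(f"covered fraction ~ {frac:.2%}")
```

With these assumed values the covered fraction comes out near one part in a thousand, consistent with the order of magnitude quoted in the text.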

Emphasis should not be placed ex- 
clusively on oceanic observations. 
Obviously, the oceans hold much of 
the key to world weather; but pre- 
dictive models should eventually be 
geared mostly for continental areas, 



where predictions are most needed. 
Continental models will necessarily 
differ from oceanic ones. Continents 
do not have surface-heat storage in 
the sense of the oceans, and frictional 
stresses as well as nuclei spectra for 
condensation and freezing are differ- 
ent. Oceanic research could thus use- 
fully be supplemented by research 
over land. Collaboration with exist- 
ing continental experiments, such as 
that of northeastern Brazil, could 
bring large technical rewards. 

Interhemispheric Communication 

In many ways, it appears that the 
center of the equatorial convergence 
zone separates the hemispheres mete- 
orologically as well as physically. 
Each has a self-contained energy and 
momentum budget, for example. If 
this picture were true for all time- 
scales, then the two hemispheres 

could be treated as independent of 
each other for all practical meteoro- 
logical purposes. 

No one really believes this, how- 
ever, although there is much doubt 
as to the time-scale on which inter- 
hemispheric mechanisms are impor- 
tant. Preliminary calculations based 
on data dating from the International 
Geophysical Year (IGY), in the 
1950's, have not revealed any impor-
tant connections; but then, the tropi- 
cal network of IGY was so deficient 
that it is impossible to treat these 
data as definitive. Here we see the 
danger of inadequate observational 
efforts. Better data are likely to 
emerge from superpressure balloons, 
World Weather Watch stations and 
satellites, and the buoys and other in- 
stallations of the GARP network. If 
these networks and data sources are 
kept up and expanded, a good start 
could be made during the 1970's on

resolving the questions relevant to 
the importance of interhemispheric 
communication for long-range 
weather changes. 

Irrespective of long-period weather 
control, an understanding of mass ex- 
changes across the equator is impor- 
tant to the prospects for worldwide 
pollution control. We know that mass 
exchanges across the equator occur, 
but we need to determine whether the 
drift of pollutants across the equator 
occurs with indifferent distribution in 
troposphere and stratosphere. If that 
is the case, nothing can be done to 
protect one hemisphere from the 
other. Alternatively, there may be point-, or
small-area, injections in preferred and
stationary locations. If that is so,
trajectory calculations toward these 
areas and measurements along them 
would at least permit warning of im- 
pending transports of particular pol- 
lutants at a high level. 


5. DUST 

African Dust and its Transport into the Western Hemisphere 

Meteorologists have recently dis- 
covered that enormous quantities of 
dust are raised over arid and semi- 
arid regions of North Africa and in- 
jected into the trade winds over the 
North Atlantic. Outbreaks of dust 
from the Sahara take about one week 
to reach the Caribbean. The amounts 
of dust are highly variable in space 
and time, both from day to day and 
season to season, but the period of 
maximum dust transport across the 
Atlantic (June to early September) 
coincides with the Atlantic hurricane 
season. Dust outbreaks from Africa 
often appear on meteorological satel- 
lite photographs as a semi-transpar- 
ent or transparent whiteness that re- 
sembles thin cirrus clouds. (See 
Figure VI-13) In such outbreaks, 
surface visibility can be moderately 
reduced as far west as the Caribbean. 

African dust outbreaks and the 
hurricanes that also have their origin 
over Africa may be interrelated in 

some ways. While it is highly un- 
likely that African dust can cause 
wind disturbances to form into hur- 
ricanes or hurricanes to dissipate, 
there is enough observational and 
theoretical evidence to suggest that 
the two phenomena might affect each 
other indirectly or directly in a sec- 
ondary role. The dust's ability to 
directly influence hurricanes lies in its 
ability to affect the thermodynamics 
of cloud growth through its role as 
an ice or condensation nucleator. 
More indirectly, the dust can affect 
the energy balance of the tropics by 
its ability to block incoming radiation 
from the sun or outgoing infrared 
radiation from the earth's surface. 

Dust can also serve as a tracer of 
atmospheric air motion. There is 
some evidence that an enhanced dust 
transport accompanies the movement 
of wind disturbances off the west 
coast of Africa. The dust content of 
the air can be modified in the disturbance
either by being washed out in
rain or by being evacuated to very
high altitudes in the updrafts that
accompany giant cumulus clouds.

Figure VI-13: This satellite photograph was taken by the ATS-3 satellite on the
afternoon of August 11, 1970. It shows a great cloud of African dust between 30°
and 60° W. longitude just north of the Tropic of Cancer.
When it is transported to levels well 
above the 3- to 4-kilometer depth 
over which it is normally found, the 
dust can more readily affect the en- 
ergy balance and particulate concen- 
trations in other parts of the globe. 

Characteristics of Dust Transport 

Since 1965, quantitative measure- 
ments of windborne dust transport 
have been made on a year-round 
basis at a tower on the island of Bar- 
bados, in the Lesser Antilles. (Re-
cently, two more such stations have 
been set up to measure dust in Ber- 
muda and Miami.) These measure- 
ments, made by scientists from the 
University of Miami, show that the 
airborne dust loading is highly vari- 
able from day to day, season to sea- 
son, and even year to year. As with
hurricanes, the primary activity comes in
summer, when the dust transport
averages 10 to 50 times that of
winter, with the daily amounts varying
from about 1 to 40 micrograms
per cubic meter.

Variability — Air-trajectory analy- 
sis shows that the summer dust orig- 
inates over arid to semi-arid regions 
in the northwestern corner of the 
African continent, and is swept south- 
ward and toward the Caribbean by 
the strong northeasterly winds that 
exist in that sector during summer. 
The dust-carrying airstream
is only 300 to 500 miles wide
as it leaves the coast of Africa, and 
the depth of the dust layer is about 
12,000 feet as determined by the 
depth of mixing over Africa. Al- 
though this flow of dust is more or 



less continuous, the variations in dust
content of the air are often quite large.

Locally, the variation in dustiness 
can be due simply to a shift in the 
dust-laden airstream. In many in- 
stances, large increases in dust load- 
ing in the Caribbean can be tied to 
specific outbreaks of dust-storm ac- 
tivity over parts of North Africa. At 
other times, the increases in dust load- 
ing of the trade winds are attribut- 
able to the venting of the normally 
dusty air over the African continent 
by a favorable wind regime which 
brings air from deep in the interior of 
the Sahara into the Atlantic trade 
winds. In many cases, however, it is 
impossible to assign a cause to the 
dust outbreak or even to detect the 
variation of dust loading downwind 
from Africa without direct measure- 

Visibility — The presence of Afri- 
can dust in the Caribbean can be seen 
as thick haze, with visibility reduced
from the 20 to 30 miles typical of haze-free
conditions to only 6 to 15
miles. In exceptionally hazy areas of
the Caribbean, the horizon resembles
that on a dry day in the American
Midwest or on a muggy day in a large
city of the Northeast. Indeed, the 
dust loadings over the Caribbean are 
probably comparable to or greater 
than those that would be found over 
much of continental United States. 

Source — There is an abrupt 
change in the general source region 
of the dust between winter and sum- 
mer. After October and until May 
(with some rare deviations), the dust 
is ash-gray to black and is thought to 
originate over the sub-Sahara from 
the Cameroons through central Ni- 
geria and the Ivory Coast. In the 
summer, however, the flow of dust 
is primed by the strong northeasterly 
winds associated with the intense 
pressure gradient that exists between 
the low pressure of the central 
Sahara and the relatively high pres- 
sure along the western coast. Then, 

the dust is a reddish-brown color 
with a tinge of yellow. 

Particle Size — A surprising as- 
pect of the size spectra of the dust 
reaching the Caribbean is the rela- 
tively large fraction of the dust (5 to 
20 percent) with particle sizes in ex- 
cess of 10 microns. In general, the 
higher the dust loading the higher is 
the fraction of dust in the larger size 
ranges. According to the Stokes settling
velocity, particles in excess of 10
microns would settle out of the air
before reaching the Caribbean unless 
they were raised to heights well in 
excess of 20,000 feet. Since the visible 
dust top is rather distinct at about 
10,000-15,000 feet over the Carib- 
bean, and is directly related to the 
top of the turbulent mixing layer over 
the Sahara, which is at about the 
same altitude, one can assume that 
virtually all the dust falls from below 
10,000-15,000 feet. Although a sub- 
stantial fraction of the dust un- 
doubtedly settles out before reaching 
the Caribbean, a certain fraction of 
all size ranges is prevented from 
being lost by the recycling of air 
(turbulent mixing) in the dust layer 
over Africa and in the trade winds. 
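The settling argument above can be made concrete with the Stokes formula for the terminal fall speed of a small sphere in still air. The Python sketch below uses assumed values for particle density, air viscosity, and air density, so the numbers are illustrative rather than definitive.

```python
import math

def stokes_settling_velocity(diameter_m, particle_density=2650.0,
                             air_density=1.2, viscosity=1.8e-5, g=9.81):
    # Stokes terminal fall speed (m/s) for a small sphere in still air.
    r = diameter_m / 2.0
    return (2.0 / 9.0) * r ** 2 * g * (particle_density - air_density) / viscosity

v = stokes_settling_velocity(10e-6)       # a 10-micron mineral grain
layer_top_m = 15000 * 0.3048              # ~15,000-ft dust top, in meters
days_to_settle = layer_top_m / v / 86400.0
print(f"fall speed ~ {v * 100:.2f} cm/s; time to settle ~ {days_to_settle:.1f} days")
```

The fall time from the dust top works out to roughly a week, comparable to the Sahara-to-Caribbean transit time, which is why grains larger than 10 microns can only survive the crossing if turbulent mixing keeps recycling them upward.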

Vertical Distribution — Recent ob- 
servations of the vertical distribution 
of the dust show that the dust con- 
centration in the air downwind from 
the Sahara is greatest in the layer 
between the dust top and the top of 
the cumulus layer (say, 4,000 to 8,000 
feet). In the lower layers, the trade- 
wind air may be air of non-Saharan 
(or partially Saharan) origin that 
flows southward to undercut the 
original dust airstream, being thereby 
enriched by mixing and by fallout 
from above. 

Possible Relation of African Dust 
to Tropical Disturbances 

A great deal of indirect theoretical 
and observational evidence exists to 
suggest that African dust may play 
some secondary role in the growth 
or suppression of tropical disturb- 

ances and the entire energetics of 
the tropical atmosphere. Conversely, 
some observations indicate that Afri- 
can disturbances have some effect on 
the movement of dust into the Carib- 
bean and that the behavior of the 
dust is at least superficially affected 
by the presence of these wave perturbations.

Dust as a Nucleator — It is well 
known that the size spectra and num- 
bers of condensation nuclei have a 
profound effect on the population of 
water droplets in clouds and the 
ability of the cloud to precipitate. 
These condensation nuclei are derived 
from various types of atmospheric 
aerosols — salt particles, dust, pol- 
lution, and the like. Much research 
has been done both in the laboratory 
and in the field, to determine the 
nucleating properties of various sub- 
stances and their relative importance 
in cloud growth. 

Similarly, the formation of ice crys- 
tals from supercooled water in clouds 
depends on the presence of foreign 
freezing nuclei and on the distribu- 
tion of existing ice crystals. Almost 
any substance will nucleate ice at 
some temperature, but only a rela- 
tively few types of substances are 
efficient in this capacity — i.e., are 
able to promote freezing at tempera- 
tures warmer than about —20° cen- 
tigrade. The best-known and most 
efficient type of ice nucleus is
silver iodide, which has been used in 
cloud-seeding experiments. But silver 
iodide is not found naturally in the 
air in significant quantities. The most 
efficient natural ice nuclei are the 
clay minerals — notably kaolinite, il- 
lite, and montmorillonite. These three
minerals are abundant in the soils 
of North Africa and have been found 
to be a prominent constituent in the 
African dust. Since the haze top is 
near the freezing level, the dust could 
only be effective in freezing if it 
were entrained into large cumulus 
which protrude to heights well above 
the haze top. 

Until very recently the Atlantic 
trade winds were thought of in terms 


of a maritime environment in which 
aerosol distribution was made up of 
sea-salt particles which provide the 
clouds with giant hygroscopic nuclei 
for condensation and with possible 
sites for freezing. The Barbados 
measurements, however, show that 
the bulk dust density in the air is 
greater than the expected concen- 
tration of sea-salt particles, even near 
the surface. Additional measure- 
ments made recently from aircraft 
near Barbados show that the ice 
nuclei were as high as 10³ to 10⁴
per cubic meter in visibly dusty areas, 
values that are comparable to those 
found over the continents. At other 
times, the ice-nuclei concentrations 
were found to be negligible in areas 
of dense haze. These measurements 
suggest that the ice nuclei are deacti- 
vated under certain conditions, pos- 
sibly by surface contamination with 
Aitken nuclei, water droplets, or 
some form of pollution. 

Such ambiguities in the physics of 
ice nuclei and the lack of aerosol 
measurements in the tropics preclude 
even an educated guess as to the 
effect of African dust on the growth 
of disturbances. At present, argu- 
ments can be made for either sup- 
pression or enhancement of cloud 
growth given an abundant supply of 

Much more evidence is required to 
form a quantitative picture of how 
much dust is entering the convective 
clouds associated with the disturb- 
ances and what the distributions are 
of ice and condensation nuclei in the 
cloud environment and the popula- 
tion of ice crystals and water drops 
in the clouds. Additional aerosol and 
dust measurements need to be made 
along the African coast and by air- 
craft flying in the vicinity of African 
disturbances. A more detailed knowl- 
edge of the vertical distribution of 
dust and other aerosols should be 
sought in these flights. If efforts are 
going to be made to seed disturb- 
ances, it would be important to know 
exactly what the background seeding 
capacity of the environment is during 

a period of exceptionally high dust 
content in order to estimate the seed- 
ability of the clouds in these hazy 
areas. Aerosol measurements of any 
sort made over Africa itself would 
be most useful. 

Dust as a Tracer of Air Motion — 
Besides being an active participant in 
the condensation and energetics of 
cumulus clouds, the dust is useful as 
a tracer of air motions in the trade 
winds, thereby leading to an under- 
standing of the dynamics of air mo-
tion at low altitudes. Some tentative 
evidence exists showing that the 
dust transport off the African coast 
is much enhanced by the passage of 
an African disturbance south of the 
dust-producing area. Intensely hazy 
areas, visible on satellite photos, were 
concentrated immediately to the rear 
(east) of an African disturbance on 
two or three occasions in the sum- 
mer of 1969. In these particular dust 
outbreaks, the leading edge of the 
dust mass remained close to the axis 
of the easterly wave disturbance as 
it crossed the ocean and passed the 
island of Barbados. Statistics for the 
past three years show that the pas- 
sage of African disturbances by 
Barbados is accompanied by a sig- 
nificant diminution in dust loading 
just prior to the wave's arrival and a marked
increase, leading to maximum dust 
loading, immediately after passage of 
the wave axis by Barbados. It is not 
clear whether the disturbance actu- 
ally prevents the dust from passing 
the wave. 

Examination of radiosonde data 
shows that the temperature, stability, 
and water-vapor content of the air are
singularly different in the dusty area. 
In general, air of high dust content 
is accompanied by a minimum of 
cloudiness. This is probably due to a 
more rapid subsidence of the strong 
northeasterly trades that are espe- 
cially susceptible to the raising of 
dust over the continent and to the 
increased stability at low levels found 
in the dusty air, rather than to an in- 
teraction of the dust with the clouds. 

Chemical and mineralogical
analysis of the dust is another pos-
sible method for determining the 
origin, composition, and seeding pos- 
sibilities of the dust. This has been 
done on a number of selected occa- 
sions using the Barbados dust sam- 
ples. The results so far are inconclu- 
sive, but they do show significant 
variations in quartz, calcite, iron, and 
other substances between winter and 
summer dust. In addition, the lead 
and zinc content of the summer dust 
is anomalously high, especially in 
comparison to the very low amounts 
of these elements in the winter dust. 
These two elements owe their abun- 
dance to industrial contamination, 
notably the burning of fossil fuels. Therefore, the
air that carried the dust from the 
northwestern corner of the Sahara 
was likely to have been over indus- 
trial Europe immediately before its 
arrival over Africa; conversely, the 
winter dust is carried in an airstream
of long-standing duration.

Measurement Techniques and 
Their Implications 

Radon-222 — Some indirect meas- 
urements of dust content can be made 
using radon-222 as a tracer of Sa- 
haran dust. Radon-222 is liberated 
from soils in large quantities and is 
mixed throughout the lower layers 
of the atmosphere in much the same 
way as water vapor and dust are 
mixed from their sources at the 
earth's surface. Unlike dust, how- 
ever, radon gas is not washed out by 
rain. This property (insolubility) can 
provide a means of studying the 
washout of dust and the later move- 
ment of Saharan air after it has 
passed through a cycle of cumulus convection.

Thus, radon-222 measured in the 
high troposphere may be useful in 
tracing the outflow of dusty air from 
the tops of cumulonimbus and can 
lead to a substantiation of the theory 
that the high concentrations of ice 
nuclei and dust particles sometimes 



found in the upper atmosphere are of 
terrestrial origin. Radon measure- 
ments in the southern hemisphere 
south of the North Atlantic trade 
winds can provide valuable informa- 
tion on cross-equatorial flow and the 
flow of air across the Intertropic 
Convergence Zone. 
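Because radon-222 decays with a half-life of about 3.82 days, any observed activity must be corrected for decay over the transit time before it can be compared with the continental source. A minimal sketch, assuming the one-week Sahara-to-Caribbean transit described earlier:

```python
import math

RN222_HALF_LIFE_DAYS = 3.82

def decay_fraction(elapsed_days, half_life=RN222_HALF_LIFE_DAYS):
    # Fraction of the original radon-222 activity still present
    # after elapsed_days of radioactive decay.
    return math.exp(-math.log(2.0) * elapsed_days / half_life)

# A Saharan dust outbreak takes about one week to reach the Caribbean,
# so only about a quarter of the continental radon signal survives.
remaining = decay_fraction(7.0)
print(f"activity remaining after 7 days: {remaining:.0%}")
```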

In one aircraft expedition made by 
a U.S. research team flying between 
Miami and Dakar, a high correlation 
was found between haze and radon 
activity. This relationship between 
dust and radon activity was substan- 
tiated in further aircraft flights made 
near Barbados in 1968 and by some
measurements made on board the 
USS Discoverer the same year. Radon
was also measured south of the equa- 
tor on the flight. More such flights 
and expeditions are needed to expand 
our fragmented knowledge of dust transport.

LIDAR — Another indirect method 
for estimating the vertical distribu- 
tion of dust is with LIDAR, which 
measures the back-scatter from a 
laser beam. However, back-scatter 

measurements are highly dependent 
on particle size and are extremely 
difficult to interpret in terms of dust 
distribution without supporting data 
to accompany them. 

Turbidity Measurement — More
useful than LIDAR in the study of 
dust is the measurement of turbidity 
from photometric measurements of 
skylight distribution and spectral at- 
tenuation of solar radiation. These 
turbidity measurements can also be 
compared with atmospheric back-scat- 
ter and albedo as determined from 
satellites. Atmospheric dust over the 
tropical Atlantic can have an im- 
portant effect on the energy balance 
of the tropics and, consequently, on 
the global circulation. Since the at- 
mospheric turbidity is a function of
the aerosol content of the air, the 
total incoming and outgoing radia- 
tion and the changes in absorptivity 
and emissivity on the vertical can af- 
fect the heating and the convective 
instability of the trade winds. There 
is some evidence that the growing 
pollution over the earth during the 
past few decades has resulted in an 

increase in atmospheric turbidity and 
a slight decline in worldwide tem- 
perature. An increase in turbidity at 
low latitudes can effect a decrease in 
worldwide temperature and a slowing 
down of the general circulation of the 
whole earth. At present there is 
some question as to the cause of the 
turbidity increase over the years. It 
may actually be due to natural causes 
such as volcanic eruptions or changes 
in dust content of the air rather than 
to industrial pollution. Since sig- 
nificant changes in dust loading from 
year to year do occur in the Atlantic 
trade winds (the amount of dust 
reaching Barbados in the summer of 
1969 was double that in the previous 
four years of record), it would there- 
fore be useful to measure turbidity in 
the Atlantic trade-wind area on a 
yearly basis in order to determine the 
natural fluctuation in the components 
of the radiation balance there. 

African dust may thus influence 
tropical storm development indirectly, 
by means of its capacity to alter the 
long-term thermodynamics of the 
tropical environment. 







Estimating Future Water Supply and Usage 

Most estimates of water supply 
and usage have been couched in terms 
of average annual water supply and 
projected usage at some future date. 
For small areas within the scope of 
a single project or a system of proj- 
ects, water supply is sometimes stated 
as the mean flow available during 
the most critical dry period in the 
record. Such assessments have the 
virtue of simplicity and are reason- 
ably well understood by the layman. 

At the national level, a statement 
of mean water supply and mean 
usage is probably entirely adequate 
because water-supply problems are 
never solved at that level. At the 
regional and local level, however, 
use of the mean supply available 
and a projected future usage deprives 
the planner of the opportunity for 
strategic evaluation of alternatives. 
The planner is concerned with sup- 
plying water for a specific period of 
years into the future. It is virtually 
certain that the actual streamflows 
during this future period will not 
duplicate those of the historic past 
and that water usage at the end of 
the period will not precisely equal the 
forecast. Faced with such uncertainty, 
the planner would be wise to treat 
both variables in terms of probability. 
Only through a probabilistic treat- 
ment can he evaluate the risk of 
expanding water-supply facilities too 
fast, with consequent excessive costs 
and risk of losing future technologi- 
cal advantages, or of developing a 
system so slowly as to threaten a 
serious water shortage at some future date.

Estimates of Water Supply 

The data base for estimates of 
water supply consists of approxi-
mately 10,000 gauging stations oper- 
ated mostly by the U.S. Geological 

Survey; in addition, many thousands 
of wells provide information on 
groundwater levels. There may be 
specific local deficiencies in this data 
base, but on the whole it must be 
judged reasonably adequate. It is 
fortunate that this base exists, be- 
cause only time can remedy deficien- 
cies — from 30 to 100 years of record 
are required to describe statistically 
the characteristics of water supply. 

Qualifying Factors — Interpretation 
of existing data on streamflow and 
groundwater is complicated by the 
fact that few stations record virgin 
conditions. Regulation by reservoirs, 
diversion from streams, pumpage 
from groundwater, alteration of 
stream channels, vegetation-manage- 
ment practices, urbanization, and 
many other factors render available 
data series inhomogeneous over time. 
In some cases, the effect of man's 
activity is rather accurately known 
and appropriate corrections can be 
made. In most instances, however, 
only the sign of the change can be 
stated with accuracy. 

Synthetic Streamflow Records — 
The last decade has seen the devel- 
opment of hydrologic simulation us- 
ing both digital and analogue com- 
puters. Simulation is capable of 
transforming precipitation data into 
synthetic streamflow records. Simu- 
lation brings many thousands of 
precipitation stations operated by the 
National Weather Service into the 
data base and makes it possible to 
make streamflow estimates at sites 
where no gauging station exists. Be- 
cause precipitation records are gen- 
erally longer than streamflow records, 
simulation permits the extension of 
flow records at currently gauged sites. 
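As one minimal illustration of such a transformation (far simpler than the simulation models of the period), a single linear-reservoir bucket can turn a precipitation series into a synthetic flow series. The loss and recession coefficients below are illustrative assumptions.

```python
def simulate_streamflow(precip, loss_fraction=0.3, recession=0.2, storage0=0.0):
    # Toy rainfall-runoff model: a fixed fraction of each period's precipitation
    # is lost to evapotranspiration; the remainder enters basin storage, and the
    # basin releases a fixed fraction of storage as streamflow each period.
    storage, flows = storage0, []
    for p in precip:
        storage += p * (1.0 - loss_fraction)
        q = recession * storage
        storage -= q
        flows.append(q)
    return flows

monthly_precip = [80, 120, 60, 10, 0, 5, 30, 90]   # mm; illustrative only
flows = simulate_streamflow(monthly_precip)
```

Even this toy model shows the essential point: storage carries rainfall forward in time, so the synthetic flow series is smoother and lagged relative to the precipitation that drives it.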

Similar development has taken 
place with respect to simulation of 

groundwater basins primarily through 
the use of analogue models. Al- 
though these models cannot perfectly 
reproduce historic streamflow or 
groundwater basin performance be- 
cause of errors in the data inputs 
and deficiencies in the models them- 
selves, errors in model outputs are 
generally random and pose no serious 
problem in probabilistic estimates of 
water supply. Simulation models also 
permit adjustment of observed flows 
or groundwater levels to virgin or 
natural conditions. It may be con- 
cluded, therefore, that we are now 
able to combine observed and syn- 
thesized data into a data base cover- 
ing a sufficient period of time to de- 
fine the mean and variance of water 
supplies with reasonable accuracy. 

Problems of Data Projection — The 
historic data base, observed or simu- 
lated, does not fully satisfy the need 
for projections of future water sup- 
ply, however. The water-supply 
planner is concerned with possible 
events over a specific period ranging 
from 20 to 100 years in the future. 
He is particularly concerned with 
the sequences of annual flows, be- 
cause a series of consecutive dry 
years will impose a much greater 
burden on his reservoir (surface or 
subsurface) than the same number 
of dry years dispersed over his plan- 
ning horizon. To meet this problem, 
the field of stochastic hydrology has 
developed during the 1960's. 

Stochastic Hydrology — In stochas- 
tic hydrology, generating functions 
derived from the estimated statistical 
characteristics of the historic record 
are used in conjunction with random 
numbers to generate many possible 
flow sequences. Thus, a thousand 
years of stochastic streamflow can 
be broken into ten 100-year periods, 
from which the planner can estimate
the probability that a proposed reser- 
voir will be adequate against any of 
these ten alternative futures. 
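A minimal sketch of this procedure, assuming a lag-one autoregressive generating function: in practice the mean, standard deviation, and serial correlation would be estimated from the historic record, but the values below are hypothetical.

```python
import random
import statistics

# Lag-one autoregressive generation of annual flows.  The mean,
# standard deviation, and serial correlation are hypothetical; in
# practice they would be estimated from the historic record.

def generate_flows(n, mean=100.0, sd=30.0, r=0.3, seed=42):
    rng = random.Random(seed)
    flows, prev = [], mean
    for _ in range(n):
        shock = rng.gauss(0.0, sd) * (1.0 - r * r) ** 0.5
        q = mean + r * (prev - mean) + shock
        flows.append(max(q, 0.0))          # streamflow cannot be negative
        prev = q
    return flows

def reservoir_fails(flows, capacity, demand):
    """True if a reservoir of the given capacity ever runs dry while
    meeting a constant annual demand."""
    storage = capacity
    for q in flows:
        storage = min(storage + q - demand, capacity)
        if storage < 0.0:
            return True
    return False

# A thousand years of synthetic flow, viewed as ten 100-year futures:
trace = generate_flows(1000)
failures = sum(reservoir_fails(trace[i:i + 100], capacity=150.0, demand=95.0)
               for i in range(0, 1000, 100))
# failures / 10 estimates the probability that the proposed design is
# inadequate against these alternative futures.
```

Because the serial correlation is preserved, runs of consecutive dry years appear in the synthetic traces with realistic frequency, which is precisely what the planner's storage analysis requires.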

Streamflow is inherently more vari- 
able than precipitation and it is fair 
to assume that we know the statisti- 
cal parameters of precipitation with 
greater accuracy than those for 
streamflow. It follows that the 
stochastic generation of precipitation 
data should be a more certain process 
than stochastic generation of stream- 
flow data. Stochastically generated 
precipitation data can be converted 
to streamflow by deterministic simu- 
lation models, although the process 
would be substantially more expen- 
sive than direct stochastic generation 
of flow data, since deterministic 
simulation is inherently more com- 
plex and time-consuming. Preliminary 
work on stochastic generation of 
rainfall has recently begun, but fur- 
ther research should be encouraged. 

The Relevance of Climate — In 
addition to the stochastic properties 
of future streamflow, a number of 
other issues arise before the planner 
can be content with his projections 
of future water supply. The first of 
these is the question of long-term 
climatic trends. An abundance of 
data demonstrates the existence of 
such trends in terms of geologic time 
and in terms of periods as short as 
a few hundred years. However, no 
sound basis exists for predicting the 
existence of a trend and its conse- 
quences over the next century. Cli- 
matic trends could alter the water- 
supply outlook in arid and semi-arid 
regions, since the hydrologic balance 
is sensitive to small changes in pre- 
cipitation input or evapotranspiration 
outgo. Techniques that could iden- 
tify causes and project trends, even 
in an approximate fashion, would be 
extremely valuable to the water- 
resource planner. 

The Relevance of Human Activ- 
ity — In addition to natural climatic 
trends, future water supplies may 
be affected by man-induced changes, 
both intentional and inadvertent. 

Intentional changes include those 
brought about by land-management 
practices, vegetation management, de- 
salinating of brackish or saline wa- 
ters, or effective reclamation of waste 
water. The question that confronts 
the planner is "Will any of these 
become practically useful and if so 
when?" The issue is the evaluation 
of probable rates of technological 
advance. It will be seen that similar 
questions arise in the discussion of 
water usage. 

Inadvertent changes in water sup- 
ply may be brought about by urban- 
ization, which increases surface runoff 
and decreases infiltration to ground- 
water. If one can make reasonable 
projections of future urban growth, 
deterministic hydrologic models can 
project the alterations in streamflow 
and accretion to groundwater. More 
subtle are the effects of air pollution, 
urbanization, and changes in land 
use and vegetative cover as they 
may affect climate. These possibilities 
underline the importance of research 
on climatic change. 
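The deterministic projection mentioned above can be illustrated with a rational-method sketch in which the basin-average runoff coefficient rises with the impervious fraction; the coefficients and precipitation figure are hypothetical.

```python
# A rational-method style sketch: the basin-average runoff coefficient
# rises with the impervious fraction.  The coefficients (0.2 pervious,
# 0.9 impervious) and the 40-inch precipitation are illustrative only.

def annual_runoff(precip, impervious_fraction,
                  c_pervious=0.2, c_impervious=0.9):
    c = (impervious_fraction * c_impervious
         + (1.0 - impervious_fraction) * c_pervious)
    return precip * c

rural = annual_runoff(40.0, 0.05)   # largely undeveloped basin
urban = annual_runoff(40.0, 0.50)   # half the basin paved over
# Runoff more than doubles while precipitation is unchanged; the
# difference is water no longer infiltrating to groundwater.
```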

Estimates of Water Use 

The problem of predicting future 
water use is far more complex than 
that of predicting water supply, if 
only because of the much larger 
number of components that must 
enter the forecast. It is convenient 
to divide the discussion of water 
use into the requirements for the 
several purposes to which water is 
most commonly applied. Before each 
of these purposes is discussed, how- 
ever, two general topics should be 

General Considerations — First, the 
distinction between diversion and 
consumption should be underlined. 
For many purposes, large quantities 
of water are diverted for use but only 
a small fraction of the diverted water 
is consumed; the rest is returned 
to the environment — sometimes de- 
graded in quality. (See Figure VII-1) 
An outstanding example is the use of
water for cooling in industry and 
power generation, which actually con- 
sumes very little water; most of the 
water used is returned to a stream 
or to the groundwater substantially 
warmer than when originally diverted. 

Because so much diverted water is
re-used, figures for diversion can be
misleading. Here we will consider
only consumptive use. Consumptive 
use is defined as that portion of the 
water which is evaporated or com- 
bined in the product so that it is no 
longer available for re-use in the 
original source system. 
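The bookkeeping implied by this definition is simple; the figures in the sketch below are hypothetical.

```python
# Hypothetical figures: of 100 units diverted for once-through cooling,
# only a small fraction is evaporated; the remainder returns (warmer)
# to the source system.

def split_diversion(diverted, consumed_fraction):
    consumed = diverted * consumed_fraction   # lost to evaporation
    returned = diverted - consumed            # available for re-use
    return consumed, returned

consumed, returned = split_diversion(100.0, 0.03)
# Diversion (100 units) overstates the demand on the resource; the
# consumptive use of 3 units is the meaningful quantity here.
```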

A second topic which deserves 
consideration on a general basis is 
that of population forecasting. For 
nearly all water uses, estimates of 
population and its geographic dis- 
tribution are fundamental. If prob- 
ability estimates of future water use 
are to be derived, they must begin 
with estimates of probable future 
population. Research has been done 
on the variance of population esti- 
mates as indicated by statistical eval- 
uation of historic predictions. A 
more fundamental study might ex- 
plore the uncertainties in each of 
the factors involved in population
forecasting.

The most difficult problem is the 
forecasting of local population by 
county or city units. Factors that do 
not enter national population fore- 
casting are involved in predictions 
of the distribution of population. 
Not the least of the factors that may 
affect future distributions is govern- 
ment policy concerning desirable 
population distribution. Some re- 
search on the optimal size of popu- 
lation concentrations may be useful. 
Is there a city size at which the 
unit cost of infrastructure is mini- 
mized? What are the advantages of 
population dispersal against increased 
growth of major metropolitan cen-
ters?

Domestic Water Use — The ques- 
tion of domestic water requirements 
depends largely on two issues. One 



[Figure VII-1: flow diagram of water diverted for irrigation, showing consumptive losses and non-consumptive return paths to the water resource]

The diagram shows schematically what becomes of water diverted for irrigation 
purposes in the U.S. The width of the stream represents the relative quantity of 
water moving in that path. Water is consumed by evaporation from various sources 
and evapotranspiration from irrigated areas. This reduces the water supply available 
for sequential uses. The non-consumptive paths such as seepage, runoff, and perco- 
lation return water to the resource pool, leaving it available for subsequent uses. 
This return water may improve or degrade the water quality depending on the initial 
quality of the water, the uses to which it has been put, and the particular character- 
istics desired by the sequential users. 

is the technology of water use. Plan- 
ners have generally assumed a slow 
increase in per capita water require- 
ments. It should not, however, be 
exceptionally difficult to redesign 
conventional plumbing fixtures and 
water-using appliances so that water- 
use rates are reduced without sacrific- 
ing the amenities of present users. 

The second factor that might sig- 
nificantly affect domestic consump- 
tion would be changes in life styles. 
A shift from dispersed single-family 
residences to multi-family residences 
would be the most significant change. 
Savings in water would be achieved 
through reduction in lawn and garden 
water requirements. Changes of this 
kind are probably closely related 
to technology through construction 
costs, transportation techniques, dis- 
position of leisure time, and public 
policy with respect to taxation. Sub- 

jects for research on the impact of 
technology on society in this area are
numerous.

Industrial Water Use — The aver- 
age values of industrial water use per 
unit of product produced are ex- 
tremely large in many industries. 
There are, however, many opportu- 
nities for reducing water use by re- 
cycling, recovery of by-products, and 
other techniques. Estimates of future 
industrial use are dependent on esti- 
mates of future industrial production 
and the extent to which water- 
conservation techniques are applied. 

Water in Agriculture — The largest 
water-using sector in the United 
States today is irrigated agriculture. 
In states like California, over 90 
percent of the water use is for irriga- 
tion. Future agricultural water re- 
quirements are therefore extraordi-
narily important. Unfortunately, they
are difficult to assess. What are the 
future needs for food and fiber 
production? How much food and 
fiber will the United States produce 
for export? How much can food and 
fiber production in the humid eastern 
states be expanded? How can water- 
use efficiency in agriculture be im- 
proved? What is the possibility of 
breeding crop types requiring less 
water or capable of using brackish 
water instead of fresh water? To what 
extent will it be possible to raise 
crops in arid regions in controlled 
environment chambers? Will exten- 
sive, low-cost greenhouses in which 
water use can be carefully controlled 
become technically feasible? These 
questions all involve issues of tech- 
nical feasibility, extent to which 
efficiency of production can be im- 
proved, and time-rate at which these 
developments can be expected. 

Energy Production — The consump- 
tive water requirements for the pro- 
duction of electric energy are rela- 
tively small. A hydroelectric power 
plant actually consumes only small 
amounts of water evaporated from 
the reservoir surface. A thermal plant 
consumes the water evaporated in 
cooling the condensers. If predictions 
that power demands will continue to 
double every decade (thousand-fold 
increase in 100 years) prove accurate, 
however, the current relatively small 
use will grow rapidly into a major 
source of water consumption. 

Again, the projection of water re- 
quirements for power production 
raises mainly technological issues. 
What are the prospects for new types 
of thermal power producers for which 
cooling-water requirements are less?
Are there possibilities of cooling 
methods that are less demanding on 
the water resource? Use of heated 
condenser water for irrigation shows 
promise of minimizing the "thermal 
pollution" of streams and improving 
the efficiency of irrigation. Not all 
thermal power plants can be situated 
close to potential irrigated areas, 
however. What other uses of waste
heat may be feasible? Is it conceiv- 
able that per capita power require- 
ments may be reduced by reducing 
power requirements in the home and 
industry? Can climate control be 
achieved? Will future urban centers 
require less energy and water for
their operation?

Navigation — Navigation is not an 
extremely heavy user. Evaporation 
losses from reservoirs from which 
water is released to maintain navi- 
gable depths downstream constitute 
the primary consumptive use. The 
quantity is probably so small that 
it deserves little consideration as com- 
pared with other demands on our 
water resource. However, it is appro- 
priate to ask what future transporta- 
tion technology may be expected. 
Will relatively slow, bulk transport 
by water continue to be a favored 
procedure? Will high-speed surface 
or air transport encroach on the 
market for bulk transport to the 
point where future expansion of navi- 
gation facilities may stop? 

Recreation — Like navigation, rec- 
reation is not presently a heavy con-
sumer of water. Primary water use
by recreation is evaporation from 
reservoirs constructed solely for wa- 
ter recreation or from an increased 
water surface area in reservoirs be- 
cause of projected recreation. It is 
unlikely that reservoirs will be built 
solely for recreational purposes in 
water-short areas. Recreation does 
not appear to be a factor of great 
uncertainty with respect to future 
water use. However, it may be ap- 
propriate to mention here the pos- 
sibility of evaporation suppression 
from water surfaces by the use of 
film-forming chemicals or covers. If 
successful techniques for evapora- 
tion suppression could be achieved, 
requirements for many of the uses 
discussed above could be reduced. 

Fish and Wildlife — It is currently 
accepted that the maintenance of 
fish and wildlife requires that a con- 
tinued flow be maintained. A sub- 
stantial part of this flow is eventually 
discharged into the oceans where it 
can no longer be used. Water re- 
quirements for this purpose are 
surely not well known. The mech- 
anisms by which a reduction in dis-
charge into estuaries may affect
marine life need to be established. 
This need derives from two com- 
peting aspects. We need to know 
how much water must be permitted 
to flow to the oceans in order to 
maintain fisheries for both economic 
and sports purposes, and the extent 
to which this fresh-water flow in- 
fluences other estuarine and oceanic 
resources. We also need to know 
the consequences of excessive flood 
flows through estuaries. Are such 
flows beneficial or detrimental? In 
addition to the consequences for fish- 
eries and wildlife, what are the effects 
of regulating streamflows to the 
ocean on sediment deposits in es- 
tuaries and harbors and on nourish- 
ment of beaches? 

In summary, probability estimates 
of water supply are limited only by 
hydrologic understanding, and solu- 
tions appear to be close at hand. 
Projections of water usage are heavily 
dependent on projections of new 
technology. Little effort has been 
devoted to this latter problem and, 
therefore, current projections of use 
are quite uncertain. 

Water Movement and Storage in Plants and Soils 

Since only five feet of soil can 
generally store fully ten inches of 
precipitation and since evaporation 
from soil and foliage returns to the 
air about 70 percent of our precipi- 
tation, these two factors represent a 
significant portion of the hydrologic 
cycle and a determinant of our water 
resources. (See Figure VII-2) Further, 
and less often noted, the relations of 
precipitation, evaporation, and stor- 
age will determine the escape of 
soluble substances such as nitrate 
from the region of roots and into 
groundwater and streams. 
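These relations can be illustrated with a "bucket" water balance in which a root-zone store of fixed capacity overflows to deep percolation; the capacity, initial storage, and monthly values below are hypothetical.

```python
# A root-zone "bucket" of fixed capacity: precipitation fills it,
# evapotranspiration empties it, and any overflow leaches below the
# roots, carrying solutes such as nitrate with it.  Capacity, initial
# storage, and monthly values are hypothetical (inches).

def soil_water_balance(precip, et, capacity=10.0, storage=5.0):
    leached = 0.0
    for p, e in zip(precip, et):
        storage = max(storage + p - e, 0.0)   # ET cannot overdraw the store
        if storage > capacity:
            leached += storage - capacity     # overflow escapes the root zone
            storage = capacity
    return storage, leached

storage, leached = soil_water_balance(precip=[6.0, 5.0, 1.0],
                                      et=[2.0, 2.0, 3.0])
# The wet second month overfills the store; the two inches of overflow
# are the water available to carry nitrate to groundwater and streams.
```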

Because the plant roots are inter- 
twined among the soil particles and 
water flows readily from one to the 
other, plant and soil — and, for that 
matter, the atmosphere as well —

must be analyzed as a continuous 
system. Then the components can 
be examined in order of their impact 
on the system, and the results used 
to improve our understanding and 
ability to predict the functioning of 
the entire system outdoors. Fortu- 
nately, our ability to cope with the 
entire system has been advanced 
materially in recent years. 

Total Evaporation 

Essentially, the soil-plant-water 
problem is to measure the extraction 
from the soil, conduction to the 
leaves, and then evaporation from 
the leaves. Some water may short- 
circuit this path and be evaporated 
from the soil or leach beyond the 

roots, but a lot — often most — 
takes the route of soil to plant to air. 

Evaporation from the Canopy — 
Recently, research has greatly im- 
proved our understanding of how 
water gets from the canopy of foliage 
to the atmosphere above. When 
evaporation from the canopy strata is 
viewed as a factor in an energy 
budget and evaporation and convec- 
tion are set proportional to tempera- 
ture and humidity differences, the 
evaporation (and the temperature and 
humidity of the air within the canopy 
microclimate) can be calculated from 
the weather above and below the 
canopy, the profiles of radiation and 
ventilation, the distribution of foliage 
area, and the boundary layer and 
stomatal resistance of the foliage. In 






[Figure VII-2: schematic diagram of the water cycle, with flows labeled S — surface runoff, P — percolation, U — uptake, R — residual, in relative units]

This is an idealized version of the water cycle. The numbers attached to the various 
processes are relative units of measure. Note that the truly important parts of the 
cycle are evaporation from the sea, precipitation, and evapotranspiration. 

1956, Penman showed how evapora- 
tion from abundant foliage suffici- 
ently wet to have wide stomata could 
be calculated from the net all-wave 
radiation available above the canopy. 
The recent advance is, therefore, in 
understanding how foliage condition 
can decrease evaporation below Pen- 
man's potential and how the evapora- 
tion and consequent temperature and 
humidity within the canopy are 
changed. The total evaporation from 
the canopy, according to our new 
understanding, is affected profoundly 

by the leaf area and, more subtly, but 
still considerably, by the stomatal 
conductivity or porosity of the foliage 
for water. 
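The dependence just described can be sketched with a "big leaf" combination equation in which evaporation declines as stomatal resistance rises; the constants and inputs below are illustrative only, not measured values.

```python
# A "big leaf" combination-equation sketch: latent heat flux computed
# from net radiation, vapor-pressure deficit, and aerodynamic and
# stomatal resistances.  All constants and inputs are illustrative.

def latent_heat_flux(rn, vpd, r_aero, r_stomatal,
                     delta=0.145,    # slope of saturation curve, kPa/degC
                     gamma=0.066,    # psychrometric constant, kPa/degC
                     rho_cp=1240.0): # volumetric heat capacity of air
    num = delta * rn + rho_cp * vpd / r_aero
    den = delta + gamma * (1.0 + r_stomatal / r_aero)
    return num / den                 # W per square meter

wide_stomata = latent_heat_flux(400.0, 1.0, 50.0, 50.0)
narrow_stomata = latent_heat_flux(400.0, 1.0, 50.0, 500.0)
# The same canopy under the same radiation evaporates far less as the
# stomata narrow -- the effect the simulators described above capture.
```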

Future Observations and Experi- 
ments — This understanding has been 
arrived at by means of mathematical 
simulation. To make a substantial 
improvement in our understanding — 
or even to test our present under- 
standing — future measurements of 
evaporation from crops and trees 
must include observations of leaf
area and porosity as well as weather 
and evaporation. Fortunately, since 
the invention of a simple, portable 
porometer by Wallihan in 1964 and 
the subsequent calibration of several 
modifications, porosity can easily be
measured.

Earlier hydrologic observations sug- 
gested that different vegetation con- 
sumed different amounts of water 
in evaporation. The simulators men- 
tioned above, along with experiments 
with sprays that shrink stomata, have 
now established that evaporation can 
be changed by modest changes in 
the canopy. During the coming years, 
therefore, one can expect a variety of 
experiments seeking the most effec- 
tive and least injurious ways of con- 
serving water in the soil through 
treating or modifying the vegetation. 

Microclimatic Measurements 

Turning to the distribution of 
evaporation, temperature, and hu- 
midity within the canopy — in con- 
trast to the sum of evaporation 
discussed above — one finds that a 
greater number of parameters can 
be effective. The changes in tem- 
perature and humidity along the path 
conducting water and sensible heat 
out of the canopy depend on the 
boundary layer around the leaf and 
the turbulence of the bulk air within 
the canopy. These two factors gen- 
erally are of smaller magnitude than 
the stomatal resistance and hence 
are relatively ineffective, we believe, 
in changing the sum of evaporation. 
However, when we turn to the 
distribution of temperature and hu- 
midity within the canopy — the mi- 
croclimatic question — these param- 
eters are influential. Scientists do 
not yet know how to measure them.

Boundary-Layer Resistance — For- 
merly, this was estimated from a 
conventional fluid mechanics equa- 
tion, employing the square root of 
leaf dimension divided by wind 
speed. Recently, however, Hunt and
others have claimed that this estimate 
is greater than the true resistance 
within a canopy. Presumably, this 
question can be resolved by fluid 
mechanics, energy budgets, and the 
new porometers. 
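The conventional estimate referred to can be sketched as follows; the proportionality constant of roughly 200 (for resistance in s/m, leaf dimension in meters, wind speed in m/s) is a commonly used approximation and should be treated, like the leaf sizes and wind speeds, as illustrative.

```python
import math

# Flat-plate estimate of leaf boundary-layer resistance: proportional
# to the square root of leaf dimension divided by wind speed.  The
# constant of about 200 is a common approximation; treat it, and the
# example leaf sizes and wind speeds, as illustrative.

def boundary_layer_resistance(leaf_dim_m, wind_m_s, k=200.0):
    """Resistance in s/m for leaf dimension in m and wind in m/s."""
    return k * math.sqrt(leaf_dim_m / wind_m_s)

small_breezy = boundary_layer_resistance(0.02, 2.0)  # small leaf, breeze
large_calm = boundary_layer_resistance(0.10, 0.5)    # large leaf, calm air
# Larger leaves in calmer air carry thicker boundary layers; canopy
# measurements such as Hunt's may fall below these flat-plate values.
```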

Diffusivity Within the Canopy — 
This is harder to measure. At present 
it is estimated by measurements 
of radiation absorption and tem- 
perature and humidity gradients. 
The method is susceptible to error, 
produces estimates at variance with 
the wind speed and employs the very 
temperatures and humidities that one 
would like to predict. A new method 
of estimating diffusivities within the 
canopy is required but none has yet
been devised.

Microclimatic observation will un- 
doubtedly continue in the future. If 
the observations are to be most useful 
in testing and improving our under- 
standing, they should include the ver- 
tical variation in leaf area and poros- 
ity as well as radiation, temperature, 
humidity, and ventilation. Since this 
makes a formidable list of equip- 
ment and tasks before a complete, 
and hence worthwhile, set of ob- 
servations can be made, microclimatic 
and evaporation studies seem ideally 
suited as testing grounds for coop- 
erative or integrated teams of sci-
entists.

Horizontal Heterogeneities — The 
final remark concerning the aerial 
portion of the problem must concern 
horizontal heterogeneity and advec- 
tion. Chimneys and sun flecks among 
the foliage clearly render our ideal, 
stratified models unrealistic. There- 
fore, efforts to incorporate these 
heterogeneities into the analysis are 
welcomed, even if they only prove 
that the ideal, homogeneous model 

gives the same average evaporation 
and microclimate as the realistic,
heterogeneous one.

The larger heterogeneities con- 
noted by "advection" are known to 
be important, justifying the term 
"oasis effect." Advection of carbon 
dioxide has already been treated 
simply in a photosynthesis model, 
and incorporating large-scale advec- 
tion into the existing evaporation 
models seems manageable and worth-
while.

Water Storage in Soil 

The transport of water to foliage 
from soil has not yet been mentioned. 
Relatively less can be said about it 
in a systematic way. As a comple- 
ment to the simulation of evaporation 
from foliage, we need a comprehen- 
sive simulator of this portion of 
the path of the water that will tell 
us how much water gets to the leaves 
and, more important, how stomatal 
resistance is modified. The simulator 
concerning soil and plant is more 
lacking in foundation than one con- 
cerning plant and air. Nevertheless, 
beginnings have been made by Cowan 
and Raschke. 

Gaps in Scientific Understanding — 
These primitive simulators reveal se- 
rious deficiencies in our understand- 
ing of (a) the relation between water 
potential in the leaves and stomatal 
resistance; (b) the conductivity of 
different root regions; and (c) the 
conductivity between soil and roots. 
This last matter includes the dif- 
ficult problem of root distribution 
through the soil profile. The actual 
storage capacity of the soil and rela- 
tion between potential and content 
seem fairly well established. The 
effect of changes of temperature in 

time and depth is yet to be coped
with.

New instruments usable in the 
field should help. The new porom- 
eters have been mentioned already, 
and the Scholander pressure chamber 
promises to reveal water potentials, 
even in roots. We are still left, how- 
ever, to search for root distributions. 
In the case of temperature differences, 
on the other hand, the problem is 
to improve our logic rather than our
instruments.

The next problem is the escape of 
water from soil storage via a moist 
surface or by leaching rather than 
through vegetation. These two es- 
capes greatly affect the loss from the 
root zone of salts and nutrients that 
pollute the water below. Evaporation 
and land leaching from the soil have 
been measured carefully in bare soil, 
but the present challenge is to under- 
stand the parameters sufficiently well 
to estimate them when a canopy of 
foliage is also removing water. This 
is a fundamental problem of the 
movement and loss of water from a 
heterogeneous porous medium with a 
variable and heterogeneous tempera- 
ture. The research of the past has 
not brought us a lucid understanding 
of the system; at present, progress 
seems most likely to come from de- 
vising a better logical framework on 
which to hang our measurements. 

A Final Word 

The reader may have noticed that 
time has not been mentioned. That 
is, analyses or simulators of an in- 
stant only have been described. In- 
tellectual satisfaction and eventual 
utility require that our understand-
ing and predictors be extended 
through time, with the storage of 
plant and soil as parameters. 



A Note on Subsidence and the Exhaustion of Water-Bearing 
and Oil-Bearing Formations 

Virtually all rocks near the earth's 
surface are to some degree porous, 
and if water is available it fills the 
pores. In some rocks the pores are 
large enough and well enough inter- 
connected so that water can readily 
flow from volumes of higher pres- 
sure to volumes of lower; such rocks 
are called aquifers — water bearers. 
Other rocks have pores so fine and 
so poorly interconnected that water 
passes through them only slowly, 
even under high pressure-gradients; 
these are aquitards — water-retarders. 
Among the common rocks, sand- 
stones, conglomerates, cavernous 
limestone, and scoriaceous lavas are 
the chief aquifers; shales are the 
principal aquitards. 


Where water has access to an inter- 
bedded series of aquitards and aqui- 
fers both are commonly saturated, 
but the aquitards are sufficiently im- 
permeable as to permit considerable 
pressure differences to exist between 
the several aquifers. When a well is 
drilled to any particular confined 
aquifer and water is withdrawn from 
it, the water pressure in the aquifer 
is decreased and the aquifer shrinks 
in thickness. The weight of the 
rocks overlying the aquifer, which 
had formerly been in part sustained 
by the pressure of the contained 
water on the base of the overlying 
aquitard, has become effectively 
greater because of the decrease in 
hydrostatic pressure; under the ef- 
fectively greater load, the aquifer 
yields elastically and the volume of 
its pores diminishes. 

Though Young's modulus for most 
sandstones is between 140,000 and 
500,000 pounds per square inch, a 
significant pressure reduction in an 
aquifer several hundred feet thick 
can readily cause a subsidence of 
several feet at the surface of the
ground. Such a subsidence may cre-
ate serious problems in drainage, sew- 
age disposal, and utility maintenance. 
More important than simple elastic 
compression of the aquifers, how- 
ever, is the fact that the lowered 
pressure in the aquifers permits slow 
drainage into them from adjoining 
or interbedded aquitards. This per- 
mits the aquitards also to be com- 
pressed by shrinking their pore
space.

Thus, at the Wilmington oil field, 
in California, the loss of pressure 
in the oil sands after 1936, when 
production on a large scale began, 
led to a surface subsidence of more 
than 32 feet (see Figure VII-3) before 
recharging of the oil sands with sea 
water under pressure finally stabilized 
the surface. Of this subsidence, only 
about 10 feet could be attributed to 

elastic compression of the oil sands; 
the remaining 22 feet was almost cer- 
tainly due to de-watering of the asso- 
ciated shales. The cost of this sub- 
sidence was many millions of dollars, 
since the railroad terminals, docks, 
shipyards, drydocks, and power 
plants had all to be rebuilt, together 
with the streets, water, and sewer 
systems of a large part of the city 
of Wilmington. 
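The elastic contribution to such subsidence can be approximated by treating the aquifer as a uniform elastic column, so that compaction equals the pressure decline times the thickness divided by Young's modulus. The pressure decline and thickness below are hypothetical; the modulus is the low end of the range quoted above.

```python
# Uniform elastic column approximation: compaction = pressure decline
# x thickness / Young's modulus.  The 500-psi decline and 400-foot
# thickness are hypothetical; the modulus is the low end of the range
# quoted in the text for sandstones.

def elastic_compaction_ft(pressure_drop_psi, thickness_ft, youngs_psi):
    return pressure_drop_psi * thickness_ft / youngs_psi

subsidence = elastic_compaction_ft(500.0, 400.0, 140000.0)
# Roughly 1.4 feet from elasticity alone; at Wilmington the slow
# drainage of interbedded shales contributed far more than that.
```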

Similar subsidence caused by with- 
drawal of fluids under pressure has 
been noted at many other seaside 
localities: Lake Maracaibo, Vene- 
zuela; Goose Creek, Texas; Hunting- 
ton Beach, California; Redondo Beach, 
California. None caused as great a 
loss as that at Wilmington. 

It is possible for similar subsidence 
to pass unnoticed at areas inland be- 


(Illustration courtesy of the Geological Society of America)
Superimposed on the photograph of the port area of Long Beach, California are 
contours of equal subsidence in feet as they existed in 1962. The subsidence in 
the upper right resulted from withdrawal of fluid from the Signal Hill oil field be- 
tween 1928 and 1962. The major subsidence in the foreground was due to with- 
drawal from the Wilmington oil field. 



cause a definite reference surface is 
not obvious. Nevertheless, the failure 
of the Baldwin Hills Dam in western 
Los Angeles, with the loss of many 
lives and millions in property damage, 
was probably due to withdrawal of 
fluids from the underlying oil field. 
The subsidence of many feet beneath 
the city of Mexico was caused by 
withdrawal of water from the lake 
sediments on which the city was 
built; considerable expenditures have 
been needed to take care of drainage.

Two other areas in California have 
suffered large losses through with- 
drawal of water from beneath. In the 
Santa Clara Valley, pumping of water 
from a confined aquifer at depth has 
led to subsidence as great as 9 feet 
between 1934 and 1959 in the city 
of San Jose; subsidence has also 
been considerable farther north in 
the valley, including such important 
industrial areas as Sunnyvale. 

On the west side of the San Joa- 
quin Valley, dewatering of surficial 
sediments had caused the surface 
to subside as much as 23 feet by 
1963 and forced alterations in the 
plans for the new irrigation system 
now under construction. 

Exhaustion of Groundwater 

Most of the agricultural produc- 
tion of the High Plains of Texas and 
eastern New Mexico tributary to the 
cities of Lubbock, Amarillo, and Por- 
tales depends on water pumped from 
the Ogallala Formation, of Pliocene 
age. The Ogallala is composed of 
gravel and sand that was deposited 
as a piedmont fan from the Rocky 
Mountains to the northwest. Erosion 
since its deposition has cut deeply 
enough to sever the connection with 
the mountain streams whose sedi- 
ments led to the formation. The result 
is that water pumped from the forma- 
tion is not being recharged from 
the mountains; the small amount of 
recharge that feeds into the under- 
ground reservoir is simply seepage 
from the overlying arid surface. Es- 
timates by the Texas Agricultural 
Experiment Station were that re- 
charge amounts to only about 104,000 
to 346,000 acre feet of water for 
the Texas portion of the High Plains, 
whereas pumpage averaged 5 million 
acre feet during the period from 
1954 to 1961. Obviously, the water 
table is sinking at a tremendous 
rate, ranging from 1.34 to as much 
as 3.72 feet per year, and the cost 
of pumping is rising accordingly. 
The water is being mined, just as
literally as is coal from a coal seam, 
and a drastic change in the economics 
of the region is unavoidable. 
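The arithmetic of such mining is straightforward. In the sketch below the recharge (mid-range, about 225,000 acre-feet per year) and pumpage (5 million acre-feet per year) are the figures quoted above, while the recoverable storage of 200 million acre-feet is an assumed value used only to illustrate the calculation.

```python
# Net drawdown on the underground reservoir is pumpage minus recharge.
# Recharge (~225,000 acre-feet/yr, mid-range) and pumpage (5 million
# acre-feet/yr) are the report's figures; the 200 million acre-feet of
# recoverable storage is an assumed value for illustration only.

def years_until_exhaustion(storage_af, pumpage_af_yr, recharge_af_yr):
    net = pumpage_af_yr - recharge_af_yr
    if net <= 0.0:
        return float("inf")          # withdrawals would be sustainable
    return storage_af / net

years = years_until_exhaustion(200e6, 5e6, 225e3)
# On the order of forty years at these rates -- consistent with the
# steady water-table declines of 1.34 to 3.72 feet per year.
```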

The Texas study projects the de- 
cline in irrigated acreage from 3.5 
million acres in 1966 to 125,000 acres 
in 2015. Cotton production is ex- 
pected to decline from about a million 
bales in 1966 to 355,000 bales in 
2015, of which 70 percent will be 
grown on dry land. At 1966 prices, 
the aggregate annual value of agricul- 
tural production is projected to de- 
cline 70 percent in fifty years. Drastic 
economic change is clearly in sight, 
not only for the farm operators but 
for suppliers of farm machinery, auto- 
mobiles, and other inputs into agri- 
culture. Urban decline is also in-
evitable.

Water is being mined at many 
other places west of the 100th merid- 
ian — notably in the Mojave Desert 
of California and many of the inter- 
montane valleys of the Basin and 
Range Province in Arizona, Cali- 
fornia, Nevada, Utah, and Oregon. 
In each of these, results comparable 
to the inevitable decline of the High 
Plains are foreseeable, though the 
rate of decline will vary from area 
to area. 



Water Quality in Forests 

Lands classified as forest, approxi- 
mately three-quarters of them in 
private ownership (see Figure VII-4) 
make up almost exactly one-third of 
the total land area of the United 
States. A large portion of this is well 
supplied with precipitation, and the 
excess over that lost by evapotran- 
spiration is the source of much of the 
water reaching streams, lakes, and 
ground waters. 

Water issuing from essentially un- 
disturbed forests, even those on steep 
terrain or with thin or erosive soils, 
is ordinarily of high quality — low 
in dissolved and suspended matter 
except during major floods, high in 
oxygen content, relatively low in 
temperature, and substantially free 
of microbial pollutants. These qual- 
ities are desirable and highly visible 
to recreational users of these lands, 
and some are absolutely essential to 
fish such as trout and salmon. They 
are also highly important to down- 
stream users, whether agricultural, 
urban, or industrial. In addition to 
any legal rights these users may have 
acquired to water volume, they of- 
ten have built-in dependencies — aes- 
thetic, technical, or economic — on 
quality features; they are commonly 
prepared to resist any real or prospec- 
tive impairment, regardless of the 
interests of the owners of the lands 
from which the water comes or other 
social claims on its use. 

Nevertheless, these water-yielding 
lands are required for a variety of 
other goods and social purposes — 
timber, recreation in many forms, 
grazing and wildlife production. A 
very large proportion of public and 
private land is held especially for 
such uses, whereas only rarely is 
there any direct recompense to the 
landholder for the outflowing waters. 
Despite contrary advocacy, it will sel- 
dom be defensible to propose water
quality as the exclusive goal of forest
land management over large areas. 

Now, all uses, all manipulation of 
soil and vegetation, pose some poten- 
tial risk to water quality — sometimes 
major, sometimes trivial. Even wild- 
erness camping, construction of roads 
essential for adequate fire protection, 
or forest cutting or herbicide treat- 
ments to reduce transpiration and so 
increase water yield conceivably could 
affect water quality adversely. Ac- 
cordingly, conflict between absolutely 
unaltered water quality and other 
land uses will likely be inevitable 
at times, and may have to be resolved 
on economic or political grounds. 
Moreover, conflicts between compet- 
ing land uses — as forage versus
timber, large game versus domestic
animal grazing, industrial raw ma- 
terials versus scenic impact — may 
be resolved on grounds other than 
water quality. 

But there is abundant evidence — 
chiefly from U.S. Forest Service ex- 
perimental watersheds — to demon- 
strate that other uses of watershed 
lands either already are or can be 
made compatible with essentially un- 
impaired water quality. A variety of 
techniques and constraints will be 
needed, such as where and how roads 
are built, the nature and timing of 
silvicultural or harvesting practices, 
how recreationists travel and camp. 
Many of these are known already; 
others are under investigation; still
others must be devised. In a few
fragile landscapes only limited access
and use may be allowable.

The diagram shows the forest ownership pattern in the U.S. in 1952. Federal, state,
and local governments owned only 27 percent of the forest land. An additional
13 percent was under the control of forest industries. Such a situation makes forest
management difficult because many private owners lack the incentive, knowledge,
or interest to use approved forestry practices on their lands.

There can be no simple, universal 
prescriptions for reconciling conflict- 
ing uses with each other or with 
water quality. The land classified as 
forest comprises an enormous number 
of combinations of vegetation types, 
soil and bedrock characteristics, land- 
forms and slopes, and climatic re- 
gimes. The latter include variation 
in total precipitation, its distribution 
and intensity on the watersheds, and 
features such as snowpack accumula- 
tion. This great number of combina- 
tions prevents easy generalization of 
studies on one watershed to others 
with different soil, slope, or precipi- 
tation features. It also emphasizes 
the need for much better charac- 
terization data — climate, geology, 
hydrology, and soils — for many 
important watershed regions, for in- 
vestigation of predictive models, and 
for expansion of on-the-ground 
"adaptive research" aimed at convert- 
ing principles discovered thus far into 
locally feasible guides for day-to- 
day operation. 

Factors Affecting Water Quality 

To a considerable degree, water 
quality has always figured in a larger 
concern with the protective function 
of forest cover upon stream flow — 
that is, flood control, water yield, 
and watershed maintenance or im- 
provement. The same natural or 
man-induced features that make for 
low infiltration rates, rapid surface 
runoff, and reduced storage in the 
soil mantle also lead variously to 
higher flood peaks but reduced flows 
in low water periods, to surface ero- 
sion and channel cutting, to sedimen- 
tation of downstream channels and
impoundments, and to high turbidi-
ties and sometimes high contents of 
material swept from the soil surface. 
Thus, turbidity and sediment content 
are valuable indices of impairment 
or improvement of the protective 
function of the watershed, in addition
to being direct measures of water
quality.

Water "quality" is a nebulous fea-
ture until described in terms of spe- 
cific attributes such as turbidity, or- 
ganic content, temperature, nitrate, 
phosphate, pesticide or other chemical 
content, and bacteriological quality. 
These are sometimes discussed as 
considerations of equal probability, 
hazard, and rank, but in fact, turn 
out to be far from equivalent in any
particular case.

Temperature and Oxygen — Re-
moval of trees or brush greatly in- 
creases direct radiation to small 
streams and materially raises maxi- 
mum temperatures in the warm sea- 
sons — up to 7 to 8 degrees centigrade
higher, according to some studies. 
Such increases may be unfavorable 
or lethal to desirable fish, especially 
to salmon and trout species which 
spawn in small headwater streams, 
and they also contribute to higher 
average temperatures of downstream 
waters. The physical basis of this 
effect is fairly straightforward, of 
course, and the temperature in- 
crease of small streams has been 
predicted quite accurately through 
use of an energy-balance technique. 
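The energy-balance reasoning referred to above can be sketched in a few lines. For a reach in steady state, the temperature rise is the absorbed radiant energy divided by the heat capacity of the water flowing through; every input value below is an illustrative assumption, not a figure from the studies cited:

```python
# Minimal steady-state energy-balance sketch for the temperature rise of a
# small stream reach exposed to direct sun after removal of shade.
# All numeric inputs below are illustrative assumptions.

RHO = 1000.0   # density of water, kg/m^3
CP = 4186.0    # specific heat of water, J/(kg K)

def temp_rise(net_radiation, surface_area, discharge):
    """Temperature rise (K) of a reach absorbing net_radiation (W/m^2)
    over surface_area (m^2) at volumetric discharge (m^3/s)."""
    heat_input = net_radiation * surface_area   # W absorbed by the reach
    mass_flow = RHO * discharge                 # kg/s of water passing through
    return heat_input / (mass_flow * CP)

# A 200-m reach, 3 m wide, fully exposed at midday, at low summer flow:
dt = temp_rise(net_radiation=700.0, surface_area=200.0 * 3.0, discharge=0.05)
print(f"Predicted midday rise: {dt:.1f} K")
```

The shade strips discussed below act on the `net_radiation` term: intercepting most of the direct beam shrinks the predicted rise proportionally, which is why narrow riparian strips are so effective.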

Experimental observations are lim- 
ited and there can be no generaliza- 
tion about the importance of this 
effect to quite different climate and 
ecological regions. Within the Pacific 
Northwest, however, knowledge of 
temperature increase and oxygen de- 
crease following removal of cover 
is sufficient to call for protection of 
spawning waters. 

A highly effective management 
remedy is to leave narrow strips of 
live vegetation for shade; such strips 
are also important safeguards against 
stream or bank disturbance by log- 
ging operations. Such remedies may 
entail substantial sacrifice of timber 
values, as well as higher costs for 
harvesting and regeneration, and ap- 
plication may well hinge on benefit/ 
cost analyses. Further, one can fore-
see occasional instances of con-
flict between retention of shade and 
decreased water temperature on the 
one hand, and efforts to increase 
low water flow through reducing 
vegetation in the riparian zone on 
the other. 

Pesticides — A number of plant- 
protection or plant-control chemicals 
have been applied to forest vegeta- 
tion, and the need for such agents 
will certainly continue even though 
particular classes of compounds, such 
as chlorinated hydrocarbons, are 
banned. Reduction of losses during 
major insect outbreaks, control of 
competing vegetation, and protection 
of new plantations or regeneration 
areas are three common situations 
in which use of chemicals might be 
essential to timber, recreation, or 
watershed values. 

In principle, any such materials 
might enter streams either by direct 
application from aircraft or sprayers, 
or after washing over or through 
the soil, or through gross spills and 
carelessness. The first of these is 
sometimes thought to be the major 
concern, although the latter is likely 
to be the most difficult to predict 
and control. 

For the most part, the compounds 
applied to forests will be similar to 
those used elsewhere in properties 
such as persistence, toxicity, mode 
of decomposition, and fixation or 
accumulation by soil, and will be 
subject to similar precautions. In 
some instances, however, there may 
be special problems of forest use aris- 
ing from difficulties of precise appli- 
cation on rough terrain, or to coarse 
or rocky soils, or to the possibility 
of rapid, short-distance transport into 
streams — as, for example, after 
treatment of riparian areas. Further- 
more, the quality standards applied 
to headwater streams may well be 
more stringent than those tolerated
elsewhere.

But in all this it should not be
forgotten that by far the largest frac-
tion of forested land is entirely un-
treated with pesticides of any sort, 
and the greatest part of the remainder 
would be treated only at intervals 
of several to many years. For ex- 
ample, a single application of 2,4,5-T 
to control overtopping brush on re- 
generation areas probably would not 
be repeated within the life of the 
new stand. 

Numerous monitoring studies with 
insecticides such as DDT and its suc- 
cessor materials during the past two 
decades have demonstrated the mag- 
nitude of direct and secondary input 
into streams to be expected from 
broadcast aerial applications. These 
also indicate both the hazards of 
applying highly toxic or persistent 
materials in this way and the meas- 
ures required to avoid or minimize 
direct contamination of waters. Again, 
fewer though significant studies with 
ground and aerial applications of 
herbicides demonstrate that careful 
regulation of mode, rate, and season 
of application allows use even in 
streamside areas with no or minimal 
contamination. Since phenoxy and 
amitrole herbicides degrade fairly 
rapidly in the forest floor, confining 
application to places and seasons 
where overland flow will not occur 
within a month or two avoids possible
contamination.

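The timing rule just described rests on simple first-order decay. A brief sketch, in which the two-week half-life is an illustrative assumption (actual persistence varies with compound, climate, and soil):

```python
import math

# First-order decay sketch for a herbicide residue in the forest floor.
# The half-life is an illustrative assumption, not a measured value.

def residue_fraction(days, half_life_days):
    """Fraction of the applied dose remaining after `days` of decay."""
    k = math.log(2) / half_life_days   # first-order rate constant, 1/day
    return math.exp(-k * days)

# With an assumed two-week half-life, a two-month wait before the first
# overland flow leaves only a few percent of the application:
print(f"Residue after 60 days: {residue_fraction(60, 14):.3f}")
```

This is why confining application to seasons well ahead of overland flow is effective for rapidly degraded compounds, and why the same rule fails for persistent ones.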
But, plainly, continued systematic 
experiments with pesticides or other 
easily detected markers under a large 
variety of field conditions are needed
to insure a high degree of predictabil- 
ity. Moreover, the increasing con- 
straint on the use of some materials 
is likely to place a high emphasis on 
development of nontoxic or easily 
decomposed materials, and on alter- 
native strategies of pest control. 

The Effects of Fire — Concentra- 
tions of dissolved solids in forest 
streams are normally low, and in- 
creases of any magnitude are usually 
associated with major disturbances 
or additions. From time to time 
concern has been expressed over the 
effects of fire, clearcutting, or other
destruction of cover, increased area
of nitrogen-fixing vegetation, and 
forest fertilization. Unfortunately, at- 
tention is sometimes directed solely 
to maximum concentrations in the 
waters from the affected areas. When 
the aggregate of small watersheds 
forming a single forested drainage 
basin is viewed as a system over 
time, however, events affecting small 
areas and at long intervals, such as 
clearcutting in a sustained-yield for- 
est, necessarily have only minor in- 
fluences on the quality of large- 
volume streams issuing from the 
entire basin. In contrast, drastic 
large-area events such as a major 
wildfire or insect pandemic could 
increase outflow concentrations for a 
relatively brief period. 

Plant ash remaining after severe 
fires can temporarily raise the base 
content and alkalinity of streams 
from affected areas. Accelerated de- 
composition of organic matter in and 
on the mineral surface after fire may 
increase nutrient outflow, though this 
has not been demonstrated. These 
several changes are probably trivial, 
however, in comparison with more 
serious and long-lasting effects on 
water temperature, turbidity, and 
flow characteristics, especially if re- 
establishment of cover is long
delayed.

But fires are of many kinds, and
forest landscapes vary enormously in 
susceptibility to post-fire erosion. 
Turbid streams, floods, and disastrous 
mudflows are well-known conse- 
quences of fire in the steep brush- 
lands of southern California. (See 
Figure VII-5) There are many such 
landscapes with highly combustible 
vegetation where uncontrolled fire is 
a major hazard to watershed values, 
including water quality. Well-docu- 
mented case histories, as well as 
small-scale experiments, thoroughly 
demonstrate the flood peaks, gully-
ing, sediment transport, and channel
filling, as well as long-term impair-
ment of water quality following 
severe wildfires on sensitive soils and 
slopes. Hence, research on fire be-
havior and control, fuel reduction,
prescribed fire, and wildfire detec-
tion and suppression are essential to
maintenance of water quality.
point is too often overlooked, and 
efforts at economic analyses or "total 
social costs" fail to weigh the proba- 
bility — and overwhelming damage — 
of major wildfires against the costs 
and minor damages of roads or other 
measures that facilitate fire control. 

Disastrous effects on water quality 
from wildfire are far from universal, 
however. In some places, wildfire 
may be followed by significant sur- 
face washing or mass movement but 
part or all of the sediment comes to 
rest and is stabilized before reaching 
the streams. Furthermore, there are 
large areas of stable soils and slopes 
that resist detachment and maintain 
adequate hydrologic capabilities even 
after severe fires. 

Much remains to be learned about 
soil and water behavior following 
fire, and especially about mass move- 
ment on steep or unstable slopes, 
about the possibilities of adverse 
precipitation events in the interval 
before revegetation of newly burned 
surfaces, and about seeding or other 
measures to hasten such revegetation. 
The sheer magnitude and obviousness 
of the immediate post-fire conse- 
quences, the costs and complexities 
of long-term studies on large burns, 
and concern with newer threats to 
water quality tend to divert attention 
from quantitative studies of recovery.

Nevertheless, present knowledge
allows arraying likelihood and pos- 
sible extent of wildfire influences on 
a scale from none to very great, 
according to landscapes, fuel type, 
and fire characteristics. Such knowl- 
edge also allows use of prescribed 
fire, at times of low hazard, for a 
variety of purposes — preparation for 
regeneration, improvement of wild- 
life habitat, and, notably, reduction 
of accumulated fire fuels that would 
otherwise vastly increase wildfire 
hazards.

The upper photograph shows an orchard near Santa Barbara, California. The single
open storm drain was normally adequate to handle storm runoff. In 1941, however,
the region enclosed by the dotted line was burned out in a brush fire. The lower
photograph shows the debris deposited by the runoff of a single light rain after
the fire.

In most of the southern
pine forests prescribed fire is actually
a legacy from annual burning by 
Indian populations, and several stud- 
ies fail to show any deleterious con- 
sequences of its repeated use. Known 
or probable exceptions, however, are 
some areas of new forests planted 
on severely eroded lands, and some 
steep and sensitive soils. Again, some 
studies of "slash burning" for reduc- 
tion of logging debris in the Douglas 
fir region reveal that the soil cover is
totally removed from only a small
percentage of the burned area so 
that infiltration remains high and 
sedimentation negligible. But greater 
fire severity, or slopes on which mass 
movement occurs, increases the likeli- 
hood of soil movement into stream 
channels. Generally feasible alterna- 
tives to fire have not yet been 
found, but several interests — includ- 
ing smoke abatement, possible value 
of logging wastes, and fish manage-
ment, as well as water quality per
se — have encouraged such research. 

In some regions the predominance 
of alder and some other nitrogen- 
fixing shrub species can be increased 
by fires, disturbance, or silvicultural 
treatment. Stands of alders fix sig- 
nificant quantities of atmospheric 
nitrogen, and some fraction of this 
addition enters streams. The extent 
of such contributions and their even- 
tual effect on stream concentrations 
are unknown, except by order-of- 
magnitude estimates. However, these 
indicate that fixation per unit area 
over a period of some years must 
often exceed the nitrogen additions 
considered in forest fertilization pro- 
posals. Hence, consequences of these 
natural additions are of very con- 
siderable interest. 
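The order-of-magnitude comparison mentioned above can be sketched as follows. Both rates used here are illustrative assumptions inserted for the sake of the comparison, not values taken from the text:

```python
# Order-of-magnitude comparison of nitrogen fixed by an alder stand with a
# single forest-fertilization dose.  Both rates are assumed, illustrative
# values; real fixation rates vary widely by stand and site.

fixation_rate = 80.0      # kg N per hectare per year (assumed)
stand_years = 10          # years of alder occupancy considered
fertilizer_dose = 200.0   # kg N per hectare, one application (assumed)

fixed_total = fixation_rate * stand_years
print(f"Fixed over {stand_years} years: {fixed_total:.0f} kg N/ha")
print(f"Ratio to one fertilizer dose: {fixed_total / fertilizer_dose:.1f}x")
```

Even with conservative assumed rates, a decade of fixation exceeds a single fertilizer application severalfold, which is the point the order-of-magnitude estimates make.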

Reduction of the forest cover by 
fire, wind, insects, and clearcutting 
causes an abrupt increase in surface 
temperatures and in mineralization 
of the organic matter. The resulting 
nutrient release may be followed by 
increased leaching of nitrates and as- 
sociated cations into streams. These 
effects are highly dependent on cli- 
mate and the quantity of surface 
organic matter, and on the rapidity 
with which a new cover of vegetation 
appears. The well-known studies at 
Hubbard Brook (New Hampshire), 
although artificial in some degree, 
served to focus attention on the 
maximum quantities of nutrients that 
may thus enter streams. Several other 
studies in regions of lesser organic 
accumulation and where natural re- 
vegetation is allowed, show only 
minor increases. A considerable 
number of experimental treatments 
and monitoring to study this effect 
further are now under way. 

Virtually no attention has been 
given to other forest management 
treatments which probably act in 
the same direction although at lower 
intensity. These are drainage of for- 
ested wetlands, broadcast burning, 
and site preparation by destroying 
vegetation and disturbing the soil. 



The exact magnitude will be highly 
variable, depending on soil and cli- 
mate. The effects of all such treat- 
ments on nutrient release, like those 
of clearcutting, are temporary, self- 
limiting, and not subject to recur- 
rence on the same area within the 
foreseeable future. Though these 
nutrient changes may be conse- 
quential for vegetation on the treated 
area, estimates suggest that any in- 
fluence on water quality must be
minor.

The Effects of Fertilizers and Other
Nutrient Sources — In recent years 
there has been a sharp increase in 
the number of experiments and op- 
erational trials using artificial fer- 
tilizers to increase timber growth and 
wildlife food supplies, and to develop 
protective vegetation on disturbed or 
eroded soils. Large-scale applications 
of nitrogen on timberlands, notably 
in the Pacific Northwest, have pro- 
voked concern that the added fer- 
tilizer would enter streams and lakes, 
increasing eutrophication and perhaps 
reducing quality of urban water sup- 

Several lines of evidence, including 
lysimeter studies on fertilized areas 
as well as the "clean-up" of sewage 
and other waste waters applied to 
forest soils, demonstrate that forest 
ecosystems are highly efficient col- 
lectors and "sinks" for added nutri- 
ents. The capacity of such sinks ap- 
pears great due to the large biomass 
low in nutrient content, wide carbon- 
nitrogen ratios of forest organic mat- 
ter, and the high phosphorus-fixing 
capabilities of most mineral soils; but 
the details are poorly known. Again, 
the possibility of increased nutrient 
content in soil and vegetation result- 
ing from fertilization has raised the 
possibility of greater release follow- 
ing timber harvest. Such questions 
point to the need for far more precise 
characterizations of the "compart- 
ments" and "fluxes" of ecosystem 
models before these can have any 
predictive value. 
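The "compartments" and "fluxes" referred to above can be made concrete with a toy model. The sketch below tracks nitrogen in a soil pool and a biomass pool, with a leaching flux to streams; every pool size and rate constant is a placeholder, and a usable model would require exactly the site-specific characterization the text calls for:

```python
# Toy two-compartment nutrient model: a soil pool losing nitrogen to plant
# uptake and stream leaching, and a biomass pool returning litter to the
# soil.  All pool sizes and rate constants are placeholders.

def step(soil, biomass, dt=1.0,
         uptake=0.05, litterfall=0.02, leach=0.005):
    """Advance both pools one time step by simple Euler integration."""
    uptake_flux = uptake * soil          # soil -> biomass
    litter_flux = litterfall * biomass   # biomass -> soil
    leach_flux = leach * soil            # soil -> streams (lost)
    soil_next = soil + dt * (litter_flux - uptake_flux - leach_flux)
    biomass_next = biomass + dt * (uptake_flux - litter_flux)
    return soil_next, biomass_next, leach_flux

soil, biomass = 1000.0, 5000.0   # kg N per hectare (placeholders)
total_leached = 0.0
for _ in range(50):              # fifty annual steps
    soil, biomass, leached = step(soil, biomass)
    total_leached += leached
print(f"Leached to streams over 50 years: {total_leached:.0f} kg N/ha")
```

Note that in such a model the predicted stream loss depends entirely on the rate constants chosen; without measured fluxes the structure has no predictive value, which is the text's point.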

Present knowledge of the fate of 
nutrients entering the soil indicates
that the more serious source of water
contamination would be direct entry 
of the applied fertilizers into streams 
and lakes. This might occur either 
through the distribution into such 
waters during aerial application, or 
in consequence of surface washing 
at some periods of the year. The 
latter chiefly concerns the borders of 
streams and the associated system 
of "temporary" streams where over- 
land flow occurs briefly at pe-
riods when the underlying soil is 
saturated. The extent of such channel 
expansion and its role in transport 
of dissolved or fine suspended mat- 
ter has been generally overlooked. 

Thus far, however, the forest land 
managers involved have been highly 
sensitive to water-quality considera- 
tions and have withheld application 
of nitrogen fertilizers in the vicinity 
of lakes or streams. In consequence, 
the tolerable upper limits of rate and 
distribution are as yet unknown. But 
several studies of fertilized water- 
sheds and monitoring of streams 
from fertilized areas are already un- 
der way and will warrant continued
attention.

Another important localized source
of nutrient enrichment comes about 
through the high concentrations of 
recreational users at major camp- 
grounds, ski developments, and the 
like. Treatment of the human waste 
generated at such areas may or may 
not render the effluent waters "micro- 
biologically safe," but the nitrogen 
and often the phosphorus contents 
usually enter the streams. The result- 
ing nutrient load is susceptible to 
reasonably accurate determination, 
but the effects on the biology of 
headwater streams and the magnitude 
of such enrichment in comparison 
with other sources mentioned above 
certainly require study. This problem 
is only marginally a concern of "for- 
est management," but in the face 
of steadily increasing recreational de- 
mands the solutions are likely to be 
difficult or expensive. Among the 
options will be prohibition of such 
use, elaborate treatment plants or 

new technologies of waste disposal,
or acceptance of altered water quality.
In any case, both the projection of
recreational expansion and hydrolog-
ical data on the streams should be
adequate for prediction of conse- 
quences when such recreational uses 
are being considered. 
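A back-of-the-envelope version of the nutrient-load determination described above might look like this. The visitor count, per-capita nitrogen figure, streamflow, and season length are all illustrative assumptions:

```python
# Back-of-the-envelope nutrient load from a heavily used campground and its
# dilution in a small receiving stream.  All input values are assumptions.

visitor_days = 40_000        # visitor-days per season (assumed)
n_per_person_day_g = 12.0    # grams N per person-day in waste (assumed)

load_kg = visitor_days * n_per_person_day_g / 1000.0
print(f"Seasonal nitrogen load: {load_kg:.0f} kg")

# Dilution in a stream flowing 0.1 m^3/s over a 120-day season:
stream_volume_l = 0.1 * 86_400 * 120 * 1000.0   # litres per season
conc_mg_per_l = load_kg * 1e6 / stream_volume_l
print(f"Season-average concentration: {conc_mg_per_l:.2f} mg N/l")
```

The load itself is easy to bound this way; what requires study, as the text notes, is the biological effect of such a concentration on a small headwater stream.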

Bacteriological Quality — Increas- 
ing recreational use is also a major 
threat to bacteriological quality of 
water from forested areas. Small 
numbers of hikers and workers, like 
small stock and wildlife populations, 
can use a large area without making 
much impact. But in forest areas 
heavily used by campers, hikers, or 
workers human waste treatment is 
commonly inadequate, primitive, or 
nonexistent, posing possible hazards 
to downstream users of untreated 
waters from such areas. Routine 
treatment offsets any such threats in 
urban distribution systems but the 
problem of reconciling health, aes- 
thetics, and recreational use remains. 

Sediment, the Pre-eminent Fac- 
tor — Concern with the varied as- 
pects of water quality, though nec- 
essary, sometimes deflects attention 
from sediment load, which is the 
major, most costly, and almost ubiq- 
uitous cause of impaired quality. Fine 
suspended matter, mineral or organic, 
as "silt" or "turbidity," imposes high 
treatment costs for urban and some 
industrial uses. It also clogs irri- 
gation ditches, destroys spawning 
grounds and bottom vegetation, and 
reduces recreational and scenic val- 
ues. Coarse materials fill channels 
and divert streams in flood, and 
often destroy the usefulness of 
flooded lands. 

Sediment movement into streams, 
together with flow rate and land- 
treatment effects, have been the main 
thrust of watershed research. As a 
result, the sources of fine and coarse 
sediments in forest watersheds are 
reasonably well known, as are the 
general relationships between sedi- 
ment production on the one hand, and
hydrological behavior and disturb-
ance of vegetation on the other. (See
Figure VII-6)

The graph shows lines of best fit for measurements of sediment particle-size dis-
tribution made from 1961 to 1964 in the Scott Run basin, Fairfax County, Virginia.
There appears to be no change in particle-size distribution with time. The low-flow
regimes show high concentrations of silt and clay. As the flow decreases and the
speed slows, the silt particles — being heavier — drop to the stream bed, leaving the
fine clay particles to become the greater portion of the load. As the flow increases,
there is an increasing concentration of the larger, sandy particles.

On the majority of forest water- 
sheds, the principal cause of erosion 
and stream turbidity outside of flood 
periods is exposure or disturbance 
of the mineral soil surface. This may 
come about through any of a number 
of causes — excessive grazing, tram- 
pling by livestock or humans in large 
numbers, roads and skid-trail con- 
struction, and, as mentioned, some- 
times after severe fire. 

Current overgrazing and the legacy 
from even more severe overgrazing
in the past pose severe problems in
some low-rainfall forest areas of 
western United States. Reducing fur- 
ther damage by livestock, and occa- 
sionally by big game, is more of a 
political-economic problem than one 
of technical know-how. Repair of 
past damage, however, is handi- 
capped by the large area and low 
values of affected lands, the slow 
pace of natural recovery, and limited 
funds for both research and applica- 
tion of known principles. 

Increasing recreational uses — in- 
cluding human traffic on trails and 
campgrounds, development of roads,
ski runs, facilities, and now the
large numbers of off-the-road ve- 
hicles — create an array of new prob- 
lems for forest land management. 
Obviously, hazard to water quality 
is only one of these, though often 
significant. Less obviously, new kinds 
of use conflicts are being generated, 
and research in behavior and values 
is likely to be as important in ad- 
dressing these as is that in economics 
and watershed management. 

Contrary to popular belief, the 
mere cutting of trees, even com- 
pletely and over large areas, seldom 
leads to any surface erosion, espe- 
cially if regrowth occurs promptly. 
The critical factor determining 
whether logging operations will or 
will not influence stream turbidity 
is how the felling, skidding, and 
hauling are conducted. There is now 
a substantial body of research and 
experience in several forest regions 
demonstrating that the mechanical 
operations and necessary road con- 
struction can be carried on with 
minor or no impact on watershed 
values and stream turbidity. 

Several essential principles of road 
design, construction, and mainte- 
nance, as well as for protection of 
stream channels, have emerged that 
minimize soil exposure and arrest 
sediment transport. These principles 
are readily translated into practice 
in many landscapes, though the op- 
erational details and controls are 
known for only a few. In some steep 
mountains or slide-prone areas, how- 
ever, geological structure and topog- 
raphy impose unforeseen hazards and 
extremely high costs. Greater avail- 
ability of soil and geotechnical in- 
formation might reduce both, though 
the resources for providing informa- 
tion to large wildland areas are 
meager. In any case, cost factors as 
well as watershed considerations have 
dictated new attention to harvesting 
and transport systems, including the 
long-used aerial cable methods and 
feasibility tests with balloon and heli- 
copter logging. 



Hence, with the exception of 
fragile or very steep lands, our pres- 
ent levels of knowledge and tech- 
nology are generally adequate to 
minimize these sources of disturbance 
or reduce their consequences. This is 
true even though many elements — 
including lack of exact prescriptions, 
increased costs, momentum of exist- 
ing systems, and unawareness of 
long-run damages — may cause ac- 
tual practice to lag well behind the 
prospects demonstrated by research. 

Needed Scientific Activity 

As the foregoing indicates, a sub- 
stantial body of knowledge and 
application has been accumulated 
through "watershed" or "watershed 
management" research on forest 
areas. Extension of research results 
and at least qualitative predictions 
to similar landscapes can be made 
with some confidence. Greater cer- 
tainty, exactness, and extent of pre- 
dictions are possible simply through 
increased funding of existing re- 
search installations. Predictive mod- 
els and simulation relating streamflow 
to physical variables and precipitation 
are being explored by hydrologists. 
Success would bring extension to 
forest watersheds for which numer- 
ous data are available, and might 
call for new modes of examining 
factors controlling surface soil loss, 
bank erosion, or other sources of
sediment.

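The simplest member of the class of predictive streamflow models mentioned above is a single linear reservoir, in which storage is proportional to outflow. A minimal sketch, with an assumed storage constant and an invented rainfall series:

```python
# Sketch of a single linear-reservoir streamflow model: storage S = k * Q,
# so dS/dt = P - Q implies dQ/dt = (P - Q) / k.  The storage constant and
# the rainfall series are illustrative assumptions.

def simulate(precip, k=10.0, q0=1.0, dt=1.0):
    """Daily outflow series for a linear reservoir driven by precip."""
    q = q0
    flows = []
    for p in precip:
        q += dt * (p - q) / k   # Euler step of dQ/dt = (P - Q) / k
        flows.append(q)
    return flows

rain = [0.0] * 5 + [20.0] * 2 + [0.0] * 23   # a two-day storm, then dry
flows = simulate(rain)
print(f"Peak outflow: {max(flows):.2f}")
print(f"Flow after recession: {flows[-1]:.2f}")
```

Real watershed models add many more stores and parameters, but all face the calibration problem the text raises: without characterization data for a basin, the constants cannot be fixed and the predictions cannot be trusted.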
Nevertheless, even within current 
concepts, there are enormous gaps 
in our knowledge of watersheds. 
Many large areas are poorly known 
in terms of exact climatic data, soil 
units, and the hydrologic behavior 
or response of watersheds to treat- 
ment. In some instances, the simple 
conceptual models derived from study 
of soil in the laboratory or agricul- 
tural field bear little resemblance to 
the behavior of wildland soils, es- 
pecially those on very steep slopes. 
Much greater efforts at watershed 
characterization and in study of the 
actual functioning of small soil-
geomorphic "systems" under field
conditions are badly needed. Such 
work is not entirely lacking (see 
Figure VII-7), but the investigators
so employed are few and the number 
of mixed-discipline investigative 
teams far fewer, especially in the 
light of the large areas involved. 

Three examples illustrate such 
needs:

1. Only within the last decade 
has it been recognized that fire 
on the steep California brush- 
lands not only destroys the 
protective cover of vegetation 
and litter but also imparts a 
non-wettable quality to the soil 
itself, apparently through con- 
densation of heat-volatilized 
substances from the litter. The 
result is reduced entry of rain- 
fall, increased surface flow, and 
erosion. This complexity has
required new research ap-
proaches, and calls for revision
of existing notions of infiltra- 
tion in both burned and pro- 
tected soils. 

2. Hewlett's variable source area 
concept of water outflow, al- 
luded to earlier, is still novel 
and its consequences for water 
quality are only now being ex- 
plored. In certain landscapes 
it seems to provide a mechan- 
ism for direct overland trans- 
port of surface materials to 
streams without passing 
through the soil filter, a pos- 
sibility usually overlooked. 

3. Again, assessments of land- 
scape stability, normal sedi- 
ment loads, and tolerance of 
man-made disturbance are com- 
monly based on short time 
periods and assumptions of
gradualness.

Land use                          Sediment yield      Channel stability

A. Natural forest or grassland.   Low                 Relatively stable, with some
                                                      bank erosion.
B. Heavily grazed areas.          Low to moderate     Somewhat less stable than A.
C. Cropping.                      Moderate to heavy   Some aggradation and increased
                                                      bank erosion.
D. Retirement of land from        Low to moderate     Increasing stability.
   cropping.
E. Urban construction.            Very heavy          Rapid aggradation and some
                                                      bank erosion.
F. Stabilization.                 Moderate            Degradation and severe bank
                                                      erosion.
G. Stable urban.                  Low to moderate     Relatively stable.

The table shows various land uses and their effect on the relative sediment yield
from the surrounding landscape as well as on the stability of stream channels. The
most severe sediment problems occur during urban construction, when covering
vegetation is removed and the flow regime in channels is changed by realignments,
increases or decreases in the flow, or obstructions placed in or alongside the natural
channel.

But the geologic
processes that shape the steep 
lands are often violent and er- 
ratic. Landslides, avalanches, 
massive floods, and abrupt 
changes in stream cutting and 
deposit are normal incidents in 
the down-wearing of steep 
mountain slopes. Since hazard 
is often unsuspected and fre- 
quency is on a larger scale 
than laymen reckon with, such 
events often appear as "ac-
cidents" or are attributed to
the wrong causes. 

It is clear that man's activities in 
some susceptible landscapes decades 
and centuries ago have increased the 
frequency or severity of such events 
and triggered self-accelerating erosion 
of unstable slopes. Now, landslides 
and slips associated with road con- 
struction are a continuing problem 
as roads are extended into steep 
remote areas. Hence, there is need 

for much better understanding of 
soil and geomorphic processes on 
vulnerable steep lands with a view 
to characterizing hazards and devis- 
ing measures of avoidance or control. 
Such research concerns not only 
forest management operations but 
equally highway construction, ski- 
slope developments, powerline clear- 
ance, mining, and all other activities 
that change stream courses, slope 
loading, or the stabilizing effects of vegetation.

Factors Relating Forest Management to Water Quality 

Water derived from forested wa- 
tersheds is generally the highest- 
quality water found under natural 
conditions although, contrary to pop- 
ular opinion, water from pristine 
forest streams is frequently unsafe 
for human consumption. Under nat- 
ural conditions, water quality is a 
function of: 

Geology and Geochemistry — Par- 
ent materials and the products of 
their weathering influence the mineral content of water.

Topography — Elevation, exposure, 
and steepness influence the form of 
precipitation, time and mode of delivery, evaporation rates, water temperature, and infiltration opportunity.

Climate — Climate influences or 
determines the amount and form of 
precipitation input and the time and 
mode of delivery of water; indirectly, 
it influences sediment and organic 
content, rate of weathering, soil de- 
velopment, and vegetative cover. 

Soils — Type and depth of soil 
mantle are significant factors in wa- 
ter quality determination, especially 
in surface water. They influence the 
rate and amount of infiltration and 
percolation and, consequently, quality 
and amount of groundwater recharge, 
the rate and amount of erosion, and, 
thus, the sediment and chemical con- 
tent of surface water. Soil influences 

biological activity and nutrient cy- 
cling processes and is a determining 
factor in type and density of vegeta- 
tive cover. 

Biota — Includes animal and plant 
forms. Animals, from soil bacteria 
and microorganisms to large wild- 
life forms, play a significant role in 
determining water quality. Similarly, 
vegetative forms from lowly mosses 
through forests exert an influence on 
water quality. These combined in- 
fluences include bacteria, nutrients, 
organic matter, and sediment or tur- 
bidity content, hydrogen ion activity, 
suspended solids, and water temperature.

Natural Disturbances — Natural ca- 
tastrophes including forest fires, insect and disease depredation, earthquakes, volcanic eruptions, landslides,
avalanches, hurricanes, and tornadoes 
all influence water quality, often in a 
major way. 

The Role of Forests 

Forest vegetation influences and in 
turn is influenced by climate, soil de- 
velopment, geologic weathering, other 
biota, and natural disturbances. Ex- 
amples of some forest influences 
which directly or indirectly affect 
water quality include: 

1. An ameliorating influence on 
local climate leading to lower 

water temperatures and lower 
evaporation rates and also, usu- 
ally, to greater transpiration 
rates and higher production of 
atmospheric oxygen. 

2. A favorable influence in reduc- 
ing flooding levels, erosion, 
and consequent sediment production and turbidity in streams.

3. A favorable influence in the 
area of nutrient cycling; more 
nutrients are held in and on 
forest land. 

4. High production of organic 
matter may produce short-term 
discoloration, and sometimes 
odors, in surface water. At the 
same time, this organic material 
has a very favorable influence 
on biotic activity in and on soil. 

5. Forest vegetation, particularly deep-rooted types, tends to provide optimum natural protection against avalanching and landslides.

6. Forests generally consume more 
water than other vegetation; 
thus, less total water may be 
available downstream for dilution.

7. Forests tend to buffer highs and 
lows of streamflow volume and 
the quality of this water. 



Impacts of Forest Management 
on Water Quality 

Other than changes brought about 
by the (usually rare, except for forest 
fires) catastrophic natural disturb- 
ances over which we have little or no 
control, the major changes wrought 
in water quality from forested water- 
sheds are those resulting from man's 
activities. Major disturbances due to forest management and other human activities include: fire, forest clearing or removal, timber harvest, road and right-of-way construction, cultural operations, insect and disease control, solid waste disposal, and recreational activities and development.

Forest Fires — Whether natural, 
deliberate, accidental, or incendiary, 
forest fires are generally conceded to 
have a deleterious effect on water 
quality. The degree of influence de- 
pends on the type and intensity of 
the fire, the time of year, and topo- 
graphic and soil conditions. Ground 
fires occurring on stable soils may 
produce only minimal deterioration 
in water quality, while intense fires 
on sensitive soils and on steep slopes 
may occasion serious damage. Effects 
on water quality may be due to in- 
creased water temperatures, increased 
ash, mineral, and organic content, as 
well as higher sediment and turbidity 
loads due to increased runoff and ero- 
sion. The effects may be restricted to 
a single season or year or they may 
last up to several decades. 

Fire used as a management tool — 
e.g., to effect deliberate ecological 
change, to control insects and disease, 
or for slash disposal — is ordinarily 
planned in areas and at seasons when 
damage to water quality would be minimal.

Forest Clearing — Removal of for- 
est for agricultural land use, for urban 
or industrial development, or for vege- 
tative-type conversion (e.g., forest to 
grass) may completely alter the 
water-quality regime. Changes will be 
greatest during the period of maxi- 

mum disturbance. Following recov- 
ery, the water-quality regime will take 
on the characteristics of the new 
land-use pattern. In some cases — 
e.g., the conversion of pinyon-juniper or chaparral forest types to grass —
there may be an improvement in 
water quality from the sediment- 
turbidity standpoint. 

Timber Harvest — The effects of 
timber harvesting on water quality 
will depend on the intensity and type 
of harvest operation and on the man- 
ner of product removal. Light selec- 
tion cuts will normally have minimal 
or no effect, while clear cuts that open 
up large areas will tend to increase 
water temperatures and increase the 
potential for subsequent erosion and 
sedimentation. Contrary to popular 
belief, the removal of the forest crop 
itself ordinarily does not occasion 
serious damage except on very steep 
slopes or on unusually sensitive soils. 
The major damage is usually due to 
harvesting and removal methods — 
i.e., skid trails, log landings, heavy- 
equipment disturbance, and, espe- 
cially, road construction and inade- 
quate maintenance. On occasion, 
yarding areas or equipment servicing 
areas may provide a source of con- 
tamination as a result of oil, gasoline, 
or chemical spills. 

Road Construction — Road and 
right-of-way construction in forests is 
a major problem insofar as water 
quality is concerned. During and fol- 
lowing clearing and construction, 
substantial areas of raw roadbed and 
cut-and-fill slopes are exposed to ero- 
sion; frequently, large amounts of 
erosional materials are washed into 
stream channels. Damage can be sub- 
stantially reduced through road loca- 
tion, carefully supervised construc- 
tion methods, immediate rehabili- 
tation of exposed areas, and good 
maintenance practices. The same 
holds true for the construction of 
rights-of-way for power lines, pipelines, and waterways (surface or underground).

Cultural Operations — In addition 
to the harvesting process, intensive 

forest management may involve one or more cultural operations such as
forest thinnings and cleanings. When 
such operations are done mechani- 
cally, little or no impairment of water 
quality should result. However, when 
chemicals such as sodium arsenate 
or 2,4,5-T are applied, caution must 
be exercised to keep such materials 
away from streams. 

Insect and Disease Control — To 
protect commercial and noncommer- 
cial forests, wilderness, and recrea- 
tion areas as well as forest parks from 
periodic disease and insect epidemics, 
control operations are essential. The 
most effective and most economic 
control methods have involved chemi- 
cals such as DDT. The environmental 
dangers inherent in chemical control 
methods, including water-quality de- 
terioration, have become increasingly 
apparent and controls have recently 
been imposed. In some cases, con- 
trolled light ground fires in forest 
areas have been applied to destroy 
vectors. Such operations have little 
influence on water quality if applied 
carefully under controlled conditions. 
Ecologic controls also have little or 
no impact upon water quality. 

Solid Waste Disposal — In har- 
vesting timber crops as well as in 
the primary conversion (sawmilling), 
relatively large volumes of solid 
waste in the form of slash, slabs, and 
sawdust need disposal. To accelerate 
new forest development, to destroy 
breeding areas and food for forest 
insects and disease pests, and to en- 
hance the forest environment it has 
been a common practice to burn the 
forest slash. While such practices 
have only minimal effect on water 
quality, they are being halted in many 
forest areas due to air-pollution con- 
siderations. Similarly, at primary 
conversion plants there are major 
problems in the disposal of sawdust, 
slabs, and edgings. Again, fire has 
been used as a primary method of 
disposal but is now being drastically 
reduced due to air pollution. Some 
of this waste material is being used 



to produce secondary products such 
as compressed-sawdust fireplace logs. 

Recreation Activity and Develop- 
ment — Outdoor recreation activity 
and developments in forest areas are 
increasing many-fold each year and 
are contributing to water-quality 
problems. Some of the forest wilder- 
ness areas are now badly overused, 
and lack of sanitation facilities and 
overuse by horse pack trains as well 
as human trampling are locally lower- 
ing water quality. A major problem 
in many forest areas results from in- 
creasing use by four-wheel-drive ve- 
hicles and trail motorcycles which 
increase erosion and add to sediment 
problems in streams. Recreation de- 
velopments in the forest ranging from 
camp and picnic grounds and summer 
homes to large ski areas are fre- 
quently poorly designed or poorly 
maintained from the standpoint of 
sanitation; they, too, are contributing 
to water-quality degradation. 

Other Forest Uses — Special uses 
of forest land — such as grazing by 
domestic livestock, mining opera- 
tions, and summer colonies or com- 
munes of people living on forest 
areas — may contribute special prob- 
lems in water quality. In general, 
grazing by domestic livestock is de- 
creasing on forest lands; conse- 
quently, from this standpoint an 
improvement in water quality can be 
expected. In mining operations in- 
volving large-scale land, subsurface 
disturbance, and road construction, 

water-quality problems increase, 
sometimes markedly, both from the 
standpoint of erosion and attendant 
sediment production and in mineral 
content of both surface and ground 

Steps Needed to Improve 
Water Quality 

While the quality of water derived 
from forest lands is in general supe- 
rior to that from other types of land- 
scapes or land uses, there is degrada- 
tion in many areas. Action is needed 
to protect water quality where it is 
good and to improve that which is 
being downgraded. 

Water-Quality Standards — By fed- 
eral legislation each state has had to 
set water-quality standards. Unfortunately, in many areas the standards set for some streams are higher than the quality of natural, or "pristine," water. For various reasons, many states lack background data on natural water quality. If realistic standards are to be set and observed, some additional monitoring of forested water-source areas is needed.

Application of Available Knowl- 
edge — In many instances, degrada- 
tion of water quality is due to lack 
of application of principles already 
known to us. More rigid require- 
ments can be written into timber sale 
and road construction and mainte- 
nance contracts and then enforced. 

Where sanitation facilities are inade- 
quate around recreation sites or sum- 
mer homes, forced improvement or 
closure can improve water quality. 
Closure or zoning of forest areas to 
specialized uses such as four-wheel- 
drive vehicles can be helpful. Re- 
duced use of sensitive wilderness 
areas or elimination of horse traffic 
in such areas is likewise an available remedy.

New Research — In many instances, remedial measures will be
conditioned by the availability of new 
research information. Examples in- 
clude: What is the human carrying 
capacity in parks and forest recrea- 
tion areas with respect to water 
quality? What type of chemicals, and 
in what concentrations, can be used to 
control insects, diseases, and weed 
species without impairment of water 
quality? What type and pattern of 
forest harvesting can be safely ap- 
plied? At what seasons of the year 
should we restrict forest use to pro- 
tect water quality? What type of 
mineral extraction activity is permis- 
sible and what kinds of safeguards 
are necessary? How can forest areas 
be used safely and beneficially in 
solid waste disposal — wastes from 
the forest itself (slash) and from in- 
dustries and municipalities? What is 
the impact of watershed management 
activity to increase water yields on 
the water-quality regime? What are 
the relationships between wildlife 
use and domestic grazing and water quality?



Global Food Production Potentials 

By the development and applica- 
tion of technology in food production 
the world can be well fed generally, 
even with its prospective doubling 
of population by the year 2000. The 
physical, chemical, biological, and en- 
gineering sciences must be used to 
develop production systems that will 
effectively utilize arable land, water, 
solar energy, energy from fossil fuel 
or other source required for mechani- 
zation of agriculture, improved seeds, 
livestock, fertilizers, pesticide chemi- 
cals and other pest-protection means, 
genetics, ecology, disease and para- 
site control in man and animals, social 
science relevant to industrialization 
of agriculture and urbanization of the 
world generally, and the building and 
use of scientific and technological 
capability in every country to meet 
its needs. 
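The prospective doubling of population by the year 2000 is itself a small piece of arithmetic worth making explicit. Assuming, purely for illustration, a horizon of roughly 30 years from the time of writing (the 30-year figure is an assumption, not a number from the report), the implied sustained growth rate can be sketched as follows:

```python
import math

def doubling_growth_rate(years: float) -> float:
    """Continuously compounded annual growth rate that doubles a population in `years`."""
    return math.log(2) / years

# A doubling over roughly 30 years (about 1970 to 2000):
rate = doubling_growth_rate(30)
print(f"{rate:.2%} per year")  # about 2.31% per year
```

This is the familiar "rule of 70": dividing 70 by the percentage growth rate gives the doubling time, so a population growing more slowly than about 2.3 percent a year would double later than 2000.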

A great deal of science basic to 
agriculture has happened because 
men wanted to find out why — why 
tillage was useful — why fallow was 
useful — why ashes stimulated new 
plant growth. Man learned by expe- 
rience; he knew even in ancient times 
that good seed, in good soil, well 
watered under a friendly sun pro- 
duced a good harvest. The major 
plant nutrients required have been 
known for more than a hundred 
years. Commercial manufacture of 
superphosphates began about 1850, 
although nitrogen did not become 
available in Germany until World 
War I and in the United States until 
1925. Mined potash and sulfur sup- 
plement natural reserves. 

Current Scientific Understanding 

The theoretical scientific basis of 
plant nutrition is an essential and 
major portion of the science basic to 
agriculture and world food produc- 
tion. Soils of the world vary widely 

in their reserves of major and minor 
plant nutrients. Some of them con- 
tain toxic amounts of such minerals 
as molybdenum or selenium. Others 
are very deficient. Amendment de- 
pends not alone on mineral analysis 
but also on the physical nature of 
the soil and its ion exchange capacity. 
The ability of the soil to produce 
crops must be assessed locally, often 

It has been estimated that there are 
potentially arable lands in the world 
equal in area to those now under 
cultivation — i.e., around 1.5 billion hectares. (See Figure VII-8.) One
of the recommendations of the Presi- 
dent's Science Advisory Committee 
on The World Food Problem was: 
"The agricultural potential of vast 
areas of uncultivated lands, particu- 
larly in the tropical areas of Latin 
America and Africa, should be thor- 
oughly evaluated." 

Water is a major factor in all food 
production. The science of hydrol- 
ogy, the technology of water manage- 
ment are basic to agriculture. Irriga- 
tion — with its concomitant problems 
of waterlogging or drainage, salinity 



[Figure VII-8: a table of continental land resources. Its columns, partly illegible here, give population in 1965 (millions of persons); total, potentially arable, and cultivated land in billions of acres; acres of cultivated land per person; and the ratio of cultivated to potentially arable land. Rows include Asia (the entry 1,855 survives, evidently its 1965 population in millions), Australia and New Zealand, North America, and South America; the remaining figures were lost in reproduction.]

The table shows the total area of the continents of the world, the part that is po- 
tentially arable, and that which is presently being cultivated. The cultivated areas 
include land under crops, temporary fallow, temporary meadows, lands for mowing 
or pasture, market and kitchen gardens, fruit trees, vines, shrubs, and rubber planta- 
tions. The land actually harvested in any given year is about one-half to two-thirds 
of the total cultivated land. Of the potentially arable land, about 11 percent of the 
total requires irrigation for even one crop. It is important to note that Africa,
Australia and New Zealand, and South America cultivate significantly less than 
half of their potentially arable land. The continents where most of the land is being 
used are those where the population density is greatest. 
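The caption's one-half to two-thirds rule can be made concrete using the roughly 1.5 billion hectares now under cultivation cited above (a back-of-envelope sketch, not figures taken from the table itself):

```python
CULTIVATED_HA = 1.5e9  # roughly the land now under cultivation, in hectares

# Land actually harvested in a given year: one-half to two-thirds of cultivated land.
harvested_low = CULTIVATED_HA * 0.5
harvested_high = CULTIVATED_HA * (2 / 3)
print(f"{harvested_low / 1e9:.2f} to {harvested_high / 1e9:.2f} billion hectares harvested per year")
```

The gap between cultivated and harvested area reflects fallow, failed plantings, and land in temporary meadow, which is why harvested acreage alone understates the land committed to agriculture.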



or leaching — poses added problems 
for hydrologists and engineers. But 
these areas of science and technology 
are useless unless they are used in 
adequate systems of agronomy, in- 
volving knowledge of soil chemistry, 
soil physics, plant physiology, plant 
genetics, and soil-plant-water rela- 
tionships in every microclimate where 
crop plants are grown. 

Science basic to optimal use of 
solar energy and science basic to 
effective use of fossil fuel or other 
energy source in crop production, 
transportation, and storage and proc- 
essing of food crops is essential. In 
many countries, fossil fuel must be 
imported while human labor is in 
oversupply. Since a man is equivalent 
only to about one-eighth horsepower, 
it is difficult, if not impossible, to use 
enough human labor at the precise 
time when planting, harvesting, or 
cultivation is required. 
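The one-eighth-horsepower figure makes the labor bottleneck concrete. A minimal sketch (the 40-horsepower tractor rating is an illustrative assumption, not a figure from the report):

```python
MAN_POWER_HP = 1.0 / 8.0  # a man is equivalent to about one-eighth horsepower

def laborers_equivalent(machine_hp: float) -> float:
    """Number of laborers whose combined power output matches one machine."""
    return machine_hp / MAN_POWER_HP

# Even a modest 40-hp tractor concentrates the power of hundreds of laborers
# into the brief window when planting, harvesting, or cultivation must happen:
print(laborers_equivalent(40.0))  # 320.0
```

The point is not total labor supply but timing: hundreds of laborers cannot be assembled, fed, and coordinated for the few critical days that one machine covers.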

Crop-Plant Genetics and Breeding 

— Genetic capacity of crop plants 

and livestock species for the produc- 

tion of food useful and acceptable to 
man is a first requirement. Then comes the question of whether native plants and animals, developed in and adapted to the many niches of a local ecosystem, are better suited to serve man's needs there than those introduced from other places.

The answer is that, for subsistence 
agriculture, the native varieties have 
many advantages. Natural selection 
over many generations has enabled 
them to survive the pests and com- 
peting organisms of their area of 
origin. But often this adaptation en- 
ables them to survive with only a 
meager excess for man's use. 

When man brings a new seed from 
a far place, it often fails in the new 
location; but not always. If it hap- 
pens to be adapted to the new location it may thrive there in the absence of the diseases and pests it
has left behind. Thus, sunflowers 
thrive in Hungary and the Ukraine 
while they are little exploited in their 
native Kansas, where they are weeds 

beset with many enemies. So, too, 
soybeans thrive in Illinois — far from 
their native China. Figure VII-9 
shows two other transplanted species. 

Selection, sometimes rather simple 
phenotypic selection, has developed 
crop plant variants used in various 
parts of the world that are often pre- 
ferred for organoleptic quality though 
inferior in productivity. "Baking 
quality" in bread wheat is not useful 
in macaroni wheats, for example. 
Phenotypic selection continues to be 
an important crop-breeding tool. 

Science basic to plant breeding has 
contributed (a) controlled methods of 
hybridization that have added yield 
to some crop plants, especially maize; 

(b) dwarfism, which has made possi- 
ble dramatic yield increases through 
response to heavy fertilizer and water 
applications without lodging, espe- 
cially in rice, wheat, and sorghum; 

(c) genetic disease resistance, espe- 
cially resistance in wheat to rust; and 

(d) selective breeding for photoperiod 


[Figure VII-9: map legend. Shaded areas distinguish "Area of Origin" from "Area of Transplanted Species" for each crop.]

The map shows the area of origin of coffee (Coffea arabica) and hevea rubber 
(Hevea brasiliensis) and the areas where, having been transplanted, they are now 
principally cultivated. In its place of origin, coffee is subject to native red rust 
(Hemileia vastatrix), whereas in the New World, no native diseases exist. Hevea
rubber is found in the New World only in the wild. In the Old World, where major 
production takes place today, there are no native pests. 



response suited to latitude, especially 
important in such crops as soybeans, 
maize, and wheat. 

Each country must have capability 
for continued breeding improvement 
of the crop plants it produces. Plant 
pathogens, for example, often develop new strains virulent to plants genetically resistant to the old pathogens, sometimes within a single crop-plant generation.

Animal Science — Aside from the 
relatively few true vegetarians in the 
world, who abstain from milk and 
eggs as well as from flesh, animal 
protein foods are status foods. Elas- 
ticity of demand for animal protein 
foods in the developing countries, in 
terms of consumer income, is very 
high. As income permits, these peo- 
ple will demand and obtain larger 
amounts of animal protein foods. 

While this demand may divert 
some cereals from human to animal 
food, most animal protein foods in 
the developing countries are and will 
continue to be produced from forage 
and milling offals and other products, 
including garbage, rejected as human 
food. There is, therefore, a very real 
need for the development of research 
and technological capability based on 
the animal sciences in all countries of 
the world. 

Among the principal problems re- 
quiring attention is research and tech- 
nology for the control and eradication 
of animal diseases, parasites, and the 
arthropod and other vectors of some 
of the major diseases of animals and 
man. An abbreviated list of the prin- 
cipal diseases would include foot-and- 
mouth disease, rinderpest, bovine 
pleuro-pneumonia, East Coast fever, 
African horse sickness, encephali- 
tides, African swine fever, malaria, 
trypanosomiasis, and schistosomiasis. 
Schistosomiasis is a major restraint 
on the full realization of the benefits 
of irrigation in tropical countries. The 
snail intermediate host of this para- 
site thrives in irrigation ditches. Two 
hundred million people are afflicted. 

Research is developing, or has de- 
veloped, control methods for all the 
diseases listed. Immunization, isola- 
tion, and vector control are all im- 
portant for one or more of them. 

Large game herbivores seem to be 
genetically resistant to, or tolerant of, 
some of these diseases. Research on 
propagation and management of such 
species may give new sources of ani- 
mal food. 

Fisheries as Food Sources — There 
is a very wide area of fisheries biol- 
ogy, culture, and engineering essen- 
tial to the scientific basis for world 
food production. Quantitatively, fish- 
eries constitute and have potential 
for only a minor portion of the 
world's food needs. However, in 
many nations they represent a quali- 
tatively excellent and preferred source 
of protein and concomitant minor 
nutrients essential to human health 
and well-being. Methods of harvest, 
preservation, and processing of ma- 
rine and estuarine fish and shellfish 
and methods of culture and propaga- 
tion of estuarine, coastal, and anad- 
romous species can protect and in- 
crease these sources of high-quality 
human food. 

In many countries, including our 
own, pond culture of carp, trout, cat- 
fish, crayfish, frogs, and other edible 
fresh-water species has a substantial potential for increasing supplies of preferred, high-quality protein foods.

Beneficial eutrophication — utiliz- 
ing animal wastes as nutrients in 
controlled aquatic ecosystems — of- 
fers substantial potential for increas- 
ing food production, recycling wastes, 
and enhancing the quality of the en- 
vironment. Knowledge of fish and 
shellfish nutritive requirements, their reproductive requirements, and their diseases, parasites, toxins, and contaminants, both chemical and biological, is an area needing research and
technological, institutional, and per- 
sonnel capability in many countries. 

Arctic and Antarctic food production might be increased by national
and international management of the 
harvest of food species and regula- 
tion of numbers of competing non- 
food species. 

Food Protection — Achievement of 
the important objective that our food 
supply shall be safe and wholesome 
requires a basis in many sciences and 
a highly varied set of technological 
capabilities that must be available in 
every country. 

Among the principal problems are: 
natural toxicants (alkaloids and others); mycotoxins, resulting from certain strains of mold, potent in parts per billion and carcinogenic in test animals; botulinus toxin; food-poisoning organisms such as Salmonella; insect infestations; and spoilage organisms.

Protection by controlled environ- 
ments, chemicals, cold, and steriliza- 
tion requires intimate knowledge of 
the physical and chemical nature of 
food products and the effect of meth- 
ods of protection on nutritive and 
functional value and on safety and 

In India, the National Council of 
Economic Advisers has estimated that 
insects take 15 percent of the stand- 
ing crop and another 10 percent after 
it is harvested and stored. Losses 
from rats are also severe both in 
fields and storage bins. Use of plant- 
protection chemicals increased from 
six million acres in 1955 to a current 
200 million acres. 
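Taken at face value, the two Indian loss estimates compound: insects take 15 percent of the standing crop and then a further 10 percent of what is harvested and stored. A quick check (assuming the second figure applies to the harvested remainder rather than the original crop) shows the combined loss approaching a quarter of production:

```python
def combined_loss(field_loss: float, storage_loss: float) -> float:
    """Fraction of the original crop lost when a storage loss follows a field loss."""
    surviving_fraction = (1 - field_loss) * (1 - storage_loss)
    return 1 - surviving_fraction

print(f"{combined_loss(0.15, 0.10):.1%}")  # 23.5%
```

Under that assumption, nearly one grain in four never reaches consumers, before rat losses are counted at all.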

New Directions for Science 

The world is principally dependent 
for its food supply on a very small 
number of crop and livestock species. 
Wheat, rice, rye, barley, oats, sor- 
ghum, maize and millet, sugarcane, 
sugar beets; potatoes, taro, cassava,
sweet potatoes; soybeans, cowpeas, 
beans, and peas; vitamins, in variety, 
a little protein of fair quality from 



cole crops and other green and yellow 
vegetables, and from fruits and nuts;
cattle, buffalo, sheep, goats, pigs; 
chickens, turkeys, ducks, and geese. 

Research is heavily concentrated on 
crops of commercial importance. Re- 
search on such crops and their com- 
mercial production does not help the 
subsistence farmer who must trade 
his small surplus for the necessities 
of life — salt, needles, cloth — that 
he is unable to produce. We need 
social science to guide us to the as- 
similation of the subsistence farmer 
into commercial agriculture or to 
urban industry. Until recently, ap- 
plied research in most developing 
countries was poorly financed and 
completely lacking in relevance to the 
problems of local farmers. Even 
where research was directed at pro- 
ducing practical results, it was gen- 
erally concentrated on cash crops for 
export rather than on basic food crops.

It is not enough to produce high 
yields of nutritious grain. In India, 
prices for fine-grain rice from old, 

low-yielding native varieties are vir- 
tually unrestricted while prices of the 
coarser high-yielding varieties are 
controlled. Total production is re- 
duced by diversion of acres from 
high-yielding to low-yielding varie- 
ties. The affluent pay for what they 
want; the poorer consumers become 
dependent on rationed supplies of 
low-quality grain. 

The Institutions — Industrialized 
nations of the world have — in in- 
stitutions widely varying in structure 
— produced, taught, and applied the 
scientific information that is the basis 
of agricultural technology. In the 
United States, federal-state coopera- 
tion among the U.S. Department of 
Agriculture (USDA) and state agri- 
cultural experiment stations in each 
of the states provides a useful means 
of coordinating research, teaching, 
and service. Agricultural research has 
had the objective of producing results 
useful in improving the productive 
capacity of the land, the efficiency of 
crop, livestock, and forest production, 
the use of agricultural products, and 
the welfare of rural people. 

This system, while close-knit, is 
not closed. Inputs from all the sci- 
ence of the world and important con- 
tributions to it are commonplace. 
Shull at Princeton, East at Harvard,
and Jones at the Connecticut Agri- 
cultural Experiment Station at New 
Haven all contributed to the scientific 
basis on which hybrid corn was de- 
veloped. But so, too, did a hundred 
others in USDA and the state agri- 
cultural experiment stations who 
painstakingly identified and modified 
the genetic stocks and the ways in 
which they could be used effectively 
in producing commercial seed for 
every latitude in which corn is grown. 

Developing nations must have their 
own institutions for agricultural 
teaching, research, and service. They may emulate the model on which our Land Grant College system was conceived.
They may find other organizations 
better suited to their needs. In any 
case, they must have institutions of 
their own to produce, teach, and ap- 
ply the science and resultant technol- 
ogy basic to efficient agriculture in a 
coordinated manner. 

The Hazard of Drought 

In most of the world, where men 
till the soil or graze animals, drought 
is a recurrent phenomenon. Given 
the preponderance of agriculture as a 
source of livelihood in the world, 
drought emerges as the major natural 
hazard of geophysical origin for man 
in terms of areal extent and numbers 
of population affected, if not in the 
intensity of harmful effects. Because 
it is a recurrent phenomenon, human 
adaptation or adjustment becomes 
possible. Indeed, most agricultural 
systems involve some adaptation. 

This statement takes as its starting 
point a human ecological context for 
the discussion of drought adaptation, 
illustrates the process of adjustment 
with two examples from widely dif- 
fering societies, and concludes with 

suggestions for the development of 
certain lines of scientific endeavor 
that promise to broaden the range of 
drought adjustment available to agriculturists.

What is Drought? 

In this ecological context, drought 
is defined as a shortage of water 
harmful to man's agricultural activi- 
ties. It occurs as an interaction be- 
tween an agricultural system and 
natural events which reduce the water 
available for plants and animals. The 
burden of drought is twofold, com- 
prising the actual losses of plant and 
animal production and the efforts 
expended to anticipate drought, and 

to prevent, reduce, or mitigate its effects.

Several important concepts follow 
from this definition of drought. First, 
for the purpose of this statement, only 
agricultural drought is being exam- 
ined; plant-water relationships that 
affect, for example, watershed yield 
are not considered. Second, drought 
is a joint product of man and nature 
and is not to be equated with natural 
variation in moisture availability. 
Natural variation is intrinsic to natu- 
ral process and only has meaning for 
man in the context of human inter- 
action. Third, the measurement of 
successful adaptation is in the long- 
term reduction of the social burden 
of drought, not simply in the increase 
in agricultural yield. The scientific 



effort required to improve human 
adaptation to drought must meet the 
same standards of efficacy, technical 
feasibility, favorable cost, and social 
acceptance that should govern any 
adaptive behavior. 

Farmer Adaptation to Drought 

In at least three parts of the world, 
the problem of human adaptation to 
drought is under continuing, inten- 
sive study. Saarinen has studied 
farmers' perceptions of the drought 
hazard on the semi-arid Great Plains 

of the United States; Heathcote has 
studied pastoral and agricultural 
farming in Australia; and Kates and 
Berry have carried out pilot studies 
of farmer perception among small- 
holders in Tanzania. By way of illus- 
tration, the work of Saarinen and 
Kates can be compared directly, using 
farmer interviews from comparatively 
dry areas of the respective countries. 
The focus in Figure VII-10 is on 
actions, on alternative adjustment 
strategies to reduce drought losses. 

The two studies were carried out 
quite independently; therefore, it is 



If the rains fail, what can a man do? (Tanzania) 

    Do nothing, wait. 
    Rainmaking, prayer. 
    Move to seek land, work, food. 
    Use stored food, saved money, sell cattle. 
    Change crops. 
    Change plot location. 
    Change time of planting. 
    Change cultivation practice. 

Adjustments per farmer = 1.07 

If a meeting were held and you were asked to give 
suggestions for reducing drought losses, what 
would you say? (United States)          No. of replies 

    No suggestions. 16 
    Rainmaking, prayer. 2 
    Quit farming. 1 
    Insurance, reserves, reduce expenditures, cattle. 16 
    Adapted crops. 2 
    Irrigation. 46 
    Change land characteristics by dams, ponds, trees, terraces. 26 
    Optimum seeding date. — 
    Cultivation: stubble mulch, summer fallow, minimum tillage, cover crops. 78 
    Others. 7 

Adjustments per farmer = 2.02 

The table shows the replies received from farmers in Tanzania and the United 
States when questioned about what they were willing to do in case of drought. 
Some 131 farmers in Tanzania and 96 in the U.S. were queried. In Tanzania, 
farmers mentioned an average of only one possible adjustment whereas U.S. farmers 
could think of an average of more than two to overcome the drought problem. 
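The per-farmer index reported in the table can be recovered by simple division over the U.S. reply counts; a minimal sketch (the counts are those given in the table, including the "no suggestions" replies):

```python
# Recompute the "adjustments per farmer" index for the 96 U.S. farmers
# from the reply counts in the table above. (The Tanzanian counts are
# not reproduced in this text, so only the U.S. figure is checked.)
us_replies = {
    "No suggestions": 16,
    "Rainmaking, prayer": 2,
    "Quit farming": 1,
    "Insurance, reserves, reduce expenditures, cattle": 16,
    "Adapted crops": 2,
    "Irrigation": 46,
    "Change land characteristics": 26,
    "Optimum seeding date": 0,
    "Cultivation practices": 78,
    "Others": 7,
}
n_farmers = 96

index = sum(us_replies.values()) / n_farmers
print(round(index, 2))  # 2.02, matching the figure in the table
```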

of considerable interest that the replies 
from differently phrased questions 
are comparable. The available per- 
ceived strategies for mechanized U.S. 
grain farmers are not intrinsically 
different from those of hoe-cultivator 
Tanzanians. The mix of perceived 
adjustments differs, however — more 
actions in total being proffered by the 
U.S. farmers, more of these related 
to farm practices, and more of these 
requiring high-level technological in- 
puts. Tanzanian farmers seem more 
inclined to pursue adjustments not 
directly related to agricultural prac- 
tices, and thus are more prepared to 
change their livelihood pattern than 
to alter their specific cropping be- 
havior. Thus, the major contrast 
that emerges is between a flexible 
life pattern with an unchanging agri- 
cultural practice as opposed to a more 
rigid life pattern with an adaptive 
agricultural practice. These behav- 
ioral patterns are suggestive of either 
alternative perceptions of nature it- 
self or of opportunity for mobility. 
The Tanzanian farmer seems willing 
to move with an uncertain nature; 
his American counterpart appears 
ready to battle it out from a fixed site. 

Broadening the Range of Available 
Adaptive Behavior 

A farmer or rancher faces the re- 
current, often perennial choice of 
planting or grazing location, of the tim- 
ing of planting and cultivation, of the 
appropriate crops or stock, and of 
methods of cultivation and grazing. 
In seeking to broaden the agricul- 
turist's range of choice of drought 
adjustment, the scientist offers his 
usual and somewhat paradoxical 
knowledge: We know more about 
plant-water relationships than seems 
evident from the application of our 
knowledge; but we know less about 
these relationships than we need to 
know in order to apply the knowl- 
edge widely. 

Data Base — We could now pro- 
vide for many parts of the world 
much improved information on which 



to base these decisions. To do so we 
would need to bring together the 
scattered record of climate, the frag- 
mentary knowledge of soil, the dis- 
persed experience with varieties and 
breeds, and the complex measure- 
ments of the impact of cultivation or 
grazing practice on available soil 
moisture. Within a framework of 
water-balance accounting, simulated 
traces of climatic data can provide 
probabilities of moisture availability 
directly related to specific varietal 
needs or stocking patterns. If these 
probabilities are used as appropriate 
weights in programming models, crop 
yields may be balanced against 
drought risk, desirable planting times 
determined, or the role of labor- 
or capital-intensive moisture-conserv- 
ing practices assessed. 
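The water-balance accounting described above can be sketched as a simple storage model run over simulated rainfall traces. All numbers below (field capacity, weekly crop demand, the rainfall distribution) are hypothetical illustrations, not values from this report:

```python
import random

# Minimal water-balance sketch: track soil moisture as a bucket, draw
# simulated weekly rainfall, and estimate the probability that stored
# moisture fails to meet the crop's demand at least once in a season.
random.seed(1)

FIELD_CAPACITY = 100.0  # mm of soil-moisture storage (assumed)
CROP_DEMAND = 25.0      # mm of water the crop needs per week (assumed)
WEEKS = 16              # length of one growing season (assumed)

def season_has_drought():
    """Run one simulated season; report whether stored moisture
    plus rainfall ever fails to meet the weekly crop demand."""
    soil = FIELD_CAPACITY / 2
    for _ in range(WEEKS):
        rain = random.expovariate(1 / 20.0)  # mean 20 mm/week (assumed)
        soil = min(FIELD_CAPACITY, soil + rain)
        if soil < CROP_DEMAND:
            return True
        soil -= CROP_DEMAND
    return False

seasons = 10_000
p_drought = sum(season_has_drought() for _ in range(seasons)) / seasons
print(f"simulated drought probability: {p_drought:.2f}")
```

Probabilities of this kind, estimated for specific varieties and sites, are what would serve as weights in the programming models mentioned above.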

A special role for the use of such 
data is for the planned agricultural 
settlement. Wherever men are in- 
duced to move to new, often strange 
environments, greater drought risks 
are often incurred as a function of 
their ignorance. The dust bowls of 
the American West, the Virgin Lands 
of the Soviet Union, and the Ground- 
nuts Scheme of colonial Tanganyika 
provide tragic evidence of the uni- 
versal cost of learning about new 
environments even with, or perhaps 
because of, the application of consid- 
erable technology. Thus, much might 
be done for both the indigenous and 
pioneer agriculturalist through the 
assemblage of the available data base, 
through the identification of missing 
information by systems analysis, 
through the filling of critical gaps by 
experiment and field research, and 
through the distillation of the final 
product in such form as to provide 
meaningful answers to the perennial 
questions of farmers, ranchers, and 
planners, be they peasant or agro- 
industrial producers. 

Water-Saving Cultivation — A 
number of the critical gaps in our 
knowledge have already been identi- 
fied. For example, data on water- 
yield relationships in less than opti- 
mal conditions are difficult to obtain. 

We know for most plants how much 
water they need to survive and how 
much water they can use if water is 
readily available, but we know little 
about the trade-off between these two 
points. The breeding of new varieties 
has, to date, seemed to require more 
rather than less water for the high- 
yielding varieties; there seems little 
widespread exploration in breeding 
of the balance between yield and 
water need. 

Though some water-saving cultiva- 
tion methods are widely practiced, 
the actual effects of some measures 
are disputed, partly because these 
effects seem to vary greatly with soil, 
slope, rainfall, and cultivation prac- 
tice. For example, tie-ridging, a wa- 
ter-conserving practice in semi-arid 
tropical areas, has a very mixed effect 
depending on the crop, soil, slope, 
and pattern of rainfall encountered. 
The proper timing of planting or 
grazing requires much more analysis. 
The probability of below-average 
rainfalls that might lead to drought 
is calculated in certain standard ways, 
usually involving the assumptions 
that rainfall events are independent 
and that the relative frequency or 
some mathematical isomorphism of 
historic events provides useful prob- 
abilities of future expectation. But 
neither of these approaches ade- 
quately forecasts the persistence of 
below-normal rainfall characteristic 
of drought conditions in temperate 
areas or the monsoonal delays asso- 
ciated with drought in tropical areas. 
Forecasts of persistence require 
knowledge of the climatic mechan- 
isms associated with the phenomenon 
and forecasts of monsoonal delay re- 
quire understanding of the associated 
weather systems. 
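The contrast drawn above between the independence assumption and observed persistence can be made concrete with a two-state wet/dry model. The probabilities here are hypothetical, chosen only to illustrate the size of the discrepancy:

```python
# Under the usual independence assumption, the chance of a long dry
# spell is the product of independent daily probabilities. A simple
# Markov chain with persistence gives a much larger chance for the
# same unconditional dry-day frequency.
P_DRY = 0.5             # unconditional probability a day is dry (assumed)
P_DRY_GIVEN_DRY = 0.8   # persistence: P(dry today | dry yesterday) (assumed)
RUN = 7                 # length of the dry spell, in days

p_independent = P_DRY ** RUN
p_markov = P_DRY * P_DRY_GIVEN_DRY ** (RUN - 1)

print(f"independent model: {p_independent:.4f}")  # 0.0078
print(f"persistent model:  {p_markov:.4f}")       # 0.1311
```

Even this crude persistence term raises the estimated frequency of a week-long dry spell more than tenfold, which is why forecasts of persistence require knowledge of the underlying climatic mechanisms rather than relative-frequency statistics alone.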

Irrigation — For a considerable part 
of the world, irrigation represents a 
crucial drought adaptation. But ir- 
rigation efficiency is notoriously low; 
the amount of water wasted prior to 
field application from conveyance, 
seepage, phreatophytes, or in misap- 
plication is very high. For all of 
these sources of water loss, the po- 

tential contribution from applied re- 
search is great. 

Nevertheless, in many parts of the 
world, water availability is far in 
advance of water utilization because 
farmers are slow to adopt the new 
system. It is with irrigation, as with 
the adoption of new hybrids or in the 
choice of any new adjustment, that 
the social sciences have a special 
role in bridging the technical isolation 
that characterizes much research and 
development and in placing such 
efforts into the ecological matrix of 
farmers' life styles, agricultural sys- 
tems, and socio-institutional settings. 
For many farmers, acceptance of ir- 
rigation literally means the accept- 
ance of a new way of life. Thus, the 
question is still wide open as to which 
farmers make the best settlers for 
the great new irrigation projects now 
on the drawing boards of many de- 
veloping countries. Or consider the 
achievements of the Green Revolu- 
tion. We are told that the rapid 
adoption of high-yielding rice and 
wheat, particularly in South Asia, 
will give needed breathing space in 
the critical Malthusian struggle for 
survival. But we are warned that 
such adoption comes at a cost of 
further stratifying rural society and 
intensifying existing trends that cre- 
ate classes of prosperous landowners 
and landless rural workers. An even 
more complex social interaction is 
found among farmers on the shores 
of Lake Victoria who seem to be 
shifting from drought-resistant millet 
to bird-resistant maize because their 
children, who formerly stayed in the 
fields at harvest time to protect the 
crops from bird pests, are now in 
school. 
All of the foregoing, the propen- 
sity to adopt innovations, rural class 
stratification, even bird pests, are 
factors capable of analysis, if not 
solution, within a framework of hu- 
man ecological systems analysis. But 
just as plant breeders have had to 
develop strategies of genetic change 
and varietal development capable of 
providing new strains quickly, so 



must social scientists begin to de- 
velop analytic frameworks capable of 
accepting varied data and providing 
better, if not the best, answers. 

Priorities for Scientific Effort 

Priorities for scientific effort de- 
signed to broaden the range of choice 
available to those who are subject 
to recurrent drought can be listed as 
follows: 
1. The assemblage and analysis 
of existing data in a systems 
context and its preparation for 
use in such form as to help 
answer the agriculturists' pe- 
rennial questions: where, what, 
how, and when to plant or 
graze. 
2. A review of the relationship 
between the development of 

high-yielding varieties and their 
moisture requirements, with a 
view to developing cereal 
grains combining drought re- 
sistance with higher yields. 
3. A search for simplified forms 
of systems analysis or critical- 
path analysis capable of iden- 
tifying crucial obstacles, needs, 
niches, and interactions in agri- 
cultural systems related to 
broadening the range of 
drought adjustment. 

4. Improvement in the efficiency 
of irrigation water use. 

5. Review and analysis of existing 
dry-land cultivation methods 
with a view to improvement 
and wider dissemination of 
moisture-conserving tech- 
niques. 
6. Research on climatic and 
weather systems designed to 
provide better forecasts of per- 
sistence in temperate areas and 
monsoonal delay in tropical 
areas. 
The thrust of these suggestions 
is in application, to make more use 
of what is already known through 
synthesis and systems analysis or 
simply scientific review, to seek a 
marked advance through social sci- 
ence technique in the adoption of 
what we already know, and to seek 
selected new knowledge where the 
gaps in existing knowledge are great 
or the opportunities seem particularly 
promising. 




Trophic Dynamics, with Special Reference to the Great Lakes 

Trophic dynamics is that kind of 
ecology which concerns itself with 
energy flow through the component 
organisms of an ecosystem. The ul- 
timate source of energy for any living 
system is, of course, the sun. Green 
plants, converting the sun's radiant 
energy into chemical energy, are said 
by ecologists to constitute the first 
trophic level within an ecosystem. 
All photosynthetic plants, regard- 
less of systematic affinity, are thus 
grouped together by ecologists be- 
cause they all perform this same 
basic function. 

Animals that subsist largely by 
eating green plants constitute the 
second trophic level, be they aphid or 
elephant. Their energy source is 
once-removed from the initial fixation 
of radiant energy. Although animals 
of this trophic level are referred to 
by ecologists as "primary consum- 
ers," the lay term "herbivore" is of- 
ten useful. Carnivores that prey 
largely upon herbivores of any sort 
constitute the third trophic level. 

There are usually no more than 
five trophic levels in an ecosystem 
because the inevitable loss of energy 
in the shift from one trophic level 
to the next higher means that the 
total energy contained in the bodies 
of organisms on the fifth trophic 
level is small relative to the first. 
This relatively small amount of en- 
ergy at the top level is disposed 
into a small number of large and 
usually widely dispersed bodies, since 
there is a tendency for the predators 
at the top levels to be larger than 
their prey. (See Figure VIII-1.) 
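The limit of about five levels follows from the arithmetic of energy transfer. Assuming the commonly cited rule of thumb of roughly ten percent transfer efficiency per step (an ecological convention, not a figure from this report), a sketch:

```python
# Energy available at each trophic level, assuming ~10% of the energy
# at one level is passed to the next. The fifth level retains only a
# ten-thousandth of the energy fixed by the green plants at the first.
EFFICIENCY = 0.10                  # assumed per-step transfer efficiency
primary_production = 1_000_000.0   # arbitrary energy units at level 1

energy = [primary_production * EFFICIENCY ** (level - 1)
          for level in range(1, 6)]
for level, e in enumerate(energy, start=1):
    print(f"level {level}: {e:12.1f}")
# level 5 holds only 100.0 units, i.e. 0.01 percent of level 1
```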

While the fifth level is often 
reached in marine ecosystems, in the 
Great Lakes it is not. Large lake 
trout feeding upon fish would be the 
top predators in the open-water com- 

munity. They operate on the fourth 
trophic level. Smaller lake trout often 
subsist largely on small crustacean 
herbivores; they would be assigned 
to the third level. Roughly speaking, 
about half of the living material in 
a large lake at any one time resides 
in the tiny cells of the numerous 
photosynthetic algae — the first level. 

In lakes as large and deep as the 
Great Lakes, the overwhelming pre- 
ponderance of life is found in the 

open waters — away from the shore 
and bottom. Yet it is still desirable 
to refer to this assemblage of life 
in the open waters as a "community," 
not an "ecosystem," because the open 
waters lack full representation of 
still a different trophic category — 
"reducers." Reducers is the term 
ecologists apply to the variety of 
bacteria and fungi that derive their 
energy from the complex molecules 
in the dead bodies and feces of other 
organisms of the system. Energeti- 


The figure illustrates an ecological pyramid showing various trophic levels. The 
higher the step in the pyramid, the fewer the number of individuals and the larger 
their size. In some environments, large animals circumvent some of the levels 
in the food chain. For example, man takes from all levels below himself, including 
that of the producers. 



cally speaking, this biological reduc- 
tion is excessively wasteful, but the 
small molecules that result from this 
degradation can be utilized by the 
photosynthetic plants, and thus re- 
enter the trophic levels discussed 
above. It must be stressed, however, 
that a not inconsiderable amount of 
reduction of dead algae and the 
abundant feces of the animal plank- 
ton occurs as these sink slowly 
through the depth of the water. 
There is thus a recycling of biologi- 
cally active elements within the 
water body itself, not dependent 
upon the seasonal recurrence of full 
vertical circulation and the cool- 
season reintroduction of the accu- 
mulation of the products of reduction 
on the bottom back into the open- 
water system of temperate lakes. 

Contrasting Trophic Dynamics in 
Terrestrial Systems — Comparison of 
some of the basic attributes of the 
open water of a great lake, or of 
the ocean itself, with those of a well- 
developed terrestrial system such as a 
forest reveals some basic dissimilar- 
ities. The general features of trophic 
dynamics sketched at the outset ap- 
ply, of course, with equal validity 
to terrestrial and aquatic systems. 
The dissimilarities arise from the 
differences in the structure of the 
dominant green plants. 

Individual producers of the forest 
attain great size, each striving to 
spread its photosynthetic apparatus 
so that it may be fully exposed to 
the sun, unshaded by its neighbors. 
The trunk and branches by which 
each forest tree maintains its leaves 
in the sun provide, in the aggregate, 
a rigid three-dimensional framework 
in relation to which the other or- 
ganisms of the system dispose them- 
selves. The leaves are the food source 
for aphid and caterpillar, sloth and 
deer, tapir and gorilla. The per- 
manent woody plexus has made it 
possible for this variety of sizes of 
herbivores to evolve, each achieving 
a different way of exploiting the 
same food resource but each small in 

size compared with the green plant, 
some part of which each consumes. 

How differently the photosynthetic 
apparatus is disposed in the Great 
Lakes! Here the individual plants 
are tiny — microscopic solitary algal 
cells or clumps and colonies just vis- 
ible to the unaided eye (or, when 
dead or moribund, evident to both 
eye and nose as floating scum). The 
principal herbivores in the open wa- 
ters are small crustaceans, large com- 
pared to the individual algal cells 
that constitute their major food, but 
often too small to cope with large 
clumps of algal cells. 

To photosynthesize, the algae must 
be in the upper, lighted water layers. 
Under ice, algae are often concen- 
trated at the very top of the water, 
but in warm seasons they are swept 
around in the Langmuir spirals in- 
duced by the wind moving over the 
water's surface. When wind is 
strong and temperature low, the 
spiral currents may carry the algae 
too deep for adequate light to pene- 
trate. But during the warmer half 
of the year the myriad cells of the 
phytoplankton are slowly spiralled 
through the well-lighted, warmer 
layer of lake water. Quite unlike the 
forest situation, the green plants of 
the open water display no semi- 
permanent, three-dimensional pattern 
of structure in relation to which ani- 
mals can orient themselves and 
evolve special behavior patterns. The 
open waters provide no place to hide! 

One reason to stress the differences 
between these two kinds of commu- 
nities is that man, the observer, is 
primitively a member of a forest or 
grassland community, and some ecol- 
ogists have betrayed too much of 
their forest experience in their in- 
terpretation of the dynamics of open- 
water systems. A part of this dif- 
ficulty of interpretation has been the 
tendency to expect, in essentially 
structureless open-water systems, the 
same kind of fine-grained adjust- 
ments of organism to environment 

that have evolved in the substratum- 
dominated terrestrial systems. 

Man-Induced Disturbances — The 
nature of the dynamic model of rela- 
tionships within the open-water com- 
munity of the Great Lakes is of more 
than academic concern. Man has 
seriously disturbed the biotic prop- 
erties of these lakes by his multi- 
farious activities. If the quality of 
these lakes is to be improved and 
continuously maintained at an im- 
proved level, a correct and complete 
understanding of the ecological inter- 
relationship is required. 

The overgrowth of the algae in 
Lake Erie is probably the most ob- 
vious manifestation of the disturb- 
ances that the biological communities 
of all the lakes have sustained to 
varying degrees. An algal over- 
growth, or, in ecologists' terms, an 
increased standing crop of the phyto- 
plankton, is a characteristic recent 
manifestation of lakes in Europe and 
North America on the shores of 
which large concentrations of human 
populations reside. 

The biological waste produced by 
the people of cities is biologically 
reduced, to varying degrees, into 
small molecules of biologically active 
elements such as nitrogen and phos- 
phorus. When these are flushed into 
lakes directly, or into their tribu- 
taries, they augment the natural sup- 
ply of plant nutrients. 

This "cultural enrichment" of lakes 
is cumulative. Once the simple com- 
pounds of nitrogen and phosphorus 
enter the lake in solution, they are 
quickly and effectively taken up by 
the green plants — the phytoplank- 
ton as well as the rooted water plants 
along the shore. Henceforth, these 
elements will reside in the complex 
molecules of organisms. They spend 
but little time in solution in the lake 
water; the amount of nitrogen and 
phosphorus that will escape through 
a lake's outlet, dissolved in the water, 
is remarkably small compared to that 



leaving the lake in the tissues of 
emerging insects or organisms other- 
wise removed from the lake. 

Approaches to Quality 

Management of lakes to maintain 
quality seeks two goals, both of 
which involve maximizing the rate 
at which the energy-rich compounds 
of nitrogen and phosphorus fixed in 
algae are passed to higher trophic 
levels. One goal is to reduce the 
standing crop of phytoplankton, 
thereby making the water more trans- 
parent; the second is to find an eco- 
nomical way to remove nitrogen and 
phosphorus from the lakes. 

The third trophic level in the open- 
water community is the lowest at 
which nitrogen and phosphorus are 
concentrated into packets of a size 
that man can manipulate and use. 
These "packets" are the bodies of 
the fish that eat the animal plank- 
ton; they can be fished from the 
lake and used directly as human 
food (as lake whitefish once were in 
large amounts) or they can be used 
as a protein source for animal nutri- 
tion (as alewives can be). 

We began this discussion of man- 
generated changes in lakes by sug- 
gesting that our conception of trophic 
dynamics within the open water is 
crucial to attempts to redress some 
of these biological imbalances. There 
are two alternate concepts of these 
relationships (to be sketched below). 
They differ in their relevance to 
achieving the two management goals 
set out above. The more recent for- 
mulations stress the role of predation 
by plankton-eating fish in control- 
ling the species composition of the 
plant and animal plankton. This con- 
cept offers hope that the two goals 
are not only compatible but might 
be achieved by the same manipula- 
tions of the system. On the other 
hand, the older concept — which 
stresses competition within a trophic 
level as the prime determinant of 

plankton composition — presents no 
simple dynamic model of relation- 
ships among the first three trophic 
levels. Attempts at management of 
disturbed lakes will, therefore, not 
only hope to achieve practical goals 
but also to test and extend the con- 
ceptual models. 

The Scientific Data Base 

In general, the data base for evalu- 
ating and extending knowledge of 
the trophic dynamic systems of the 
Great Lakes is inadequate. This dy- 
namic approach demands knowledge 
of the interrelationships of the ele- 
ments of the lake ecosystem, while 
all that is now available are unre- 
lated segments of data concerning 
various aspects of the ecosystem. 
Data on the seasonal changes in the 
physical and chemical parameters for 
more than a few stations at a time 
in any one lake have become avail- 
able only within the past decades. 
Attempts to relate these physico- 
chemical to biological changes have 
only been sporadic. Of the biologi- 
cal data, that on changes in the com- 
position of the fish stock is probably 
most nearly adequate. That on the 
plant and animal plankton, which 
comprise the bulk of the biomass, is 
spotty and inadequate. A recently 
published bibliography of the Great 
Lakes plankton studies lists over 400 
papers, but, as the bibliographer 
notes: 
The biology and ecology of the 
plankton remains poorly known. 
Most papers are descriptive and 
concentrate heavily on taxonomy 
and distribution of certain orga- 
nisms. Experimental work on the 
dynamics of Great Lakes plankton 
is urgently needed in light of rap- 
idly changing environmental con- 
ditions and fluctuating fish stocks. 

The last sentence makes the essential 
point: Significant studies of the 
trophic dynamics involve simultane- 
ous studies of physico-chemical pa- 
rameters, the phytoplankton, the zoo- 

plankton, the planktivorous fi 
the piscivores. 

Various bits of work done recently 
in Lake Michigan can be put together 
to provide some insight into the 
trophic dynamics of that lake. This 
has provided the reassuring informa- 
tion that changes in the composition 
of the animal plankton following 
changes in stocks of planktivorous 
fish (establishment of alewives, to be 
specific) have been precisely what 
would be predicted from knowledge 
of the dynamics of much smaller 
lakes. Furthermore, the time required 
for the changes to be manifest in the 
animal plankton of Lake Michigan 
is not inordinately greater than the 
time required in smaller lakes. This 
is not surprising, because the total 
size of the system should be less 
significant than the mean ratio of 

Theoretical Formulations: 
Control from Above 

A recent theoretical formulation 
states that the composition of the 
first trophic levels in the open-water 
communities of large lakes is deter- 
mined in large measure by the selec- 
tive feeding habits of the planktivo- 
rous fish. The prey selections by the 
schools of zooplankton-eating fish 
directly determine the species com- 
position of the animal plankton. This 
indirectly affects the quantitative and 
qualitative composition of the phyto- 
plankton (algae, bacteria) because 
species of animal plankton differ in 
the effectiveness with which their 
populations can collect algae and 
other small particles from the lake 
water. 
Large crustacean zooplankters of 
the genus Daphnia play a crucial role 
in the indirect control of the first 
trophic level resulting from the selec- 
tive feeding of the third level. The 
large Daphnia are both the favorite 
food of freshwater planktivores and 
the most effective collectors of small 
particles (1-50 microns) from the 



medium. When planktivore stocks 
are sufficiently high, the populations 
of large Daphnia are reduced to in- 
significant numbers. Since the smaller 
crustacean competitors that replace 
them (see Figure VIII-2) are less 
effective in collecting small algae, the 
algal populations will tend to in- 
crease, making the lake water less 
transparent. 
This theory, in essence, states that 
the composition of the open-water 
community is determined by the 
trophic actions of the highest (third 
and fourth) trophic levels. The for- 
mulation suggests a management 
concept for controlling the effects of 
the continued enrichment pollution 
of the Great Lakes. In essence, the 
plan would be to reduce planktivore 
pressure in such a way as to maxi- 
mize the populations of Daphnia 
which are most effective in removing 
algae from suspension. The plank- 
ton-eating fish could be removed by 
man through fishing. Removing the 
fish would remove some "packets" 
of nitrogen and phosphorus in the 
lake ecosystem at the same time as it 
permitted the proliferation of Daph- 
nia. The fish themselves, depending 
on their species, could be variously 
used as human food, animal food, or 
as a source of oils and other material 
for chemical manipulation. 

The stocks of planktivores could 
also be kept in check by introducing 
and manipulating stocks of piscivo- 
rous fish. For example, the introduc- 
tion of coho salmon into Lake Mich- 
igan is an attempt at controlling the 
burgeoning population of the alewife 
(Alosa pseudoharengus — originally 
a marine planktivore that, despite its 
abundance in many freshwater lakes, 
is still imperfectly adapted to the 
peculiarities of a freshwater exist- 
ence). While this method of con- 
trolling planktivores has the ad- 
vantage of permitting the nitrogen 
and phosphorus to be removed in 
large packets that tend to find greater 
acceptance as human food, the total 
amount of these elements that could 
be extracted from the fourth trophic 

level of the lake is at most one- 
seventh of that which could be re- 
moved via the third. It is thus less 
satisfactory as a means of decreasing 
the total amount of nitrogen and 
phosphorus from a lake than is re- 
moval of fish from the third (plank- 
tivore) level. 
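The "at most one-seventh" comparison is a direct consequence of the energy lost in moving nitrogen and phosphorus up one more trophic level; the quantities below are arbitrary units chosen only to restate that ratio:

```python
# Nutrient "packets" harvestable at the fourth trophic level relative
# to the third, using the report's stated ceiling of one-seventh.
TRANSFER = 1 / 7  # at most one-seventh reaches the fourth level

level3_removable = 700.0  # arbitrary units of N and P at the third level
level4_removable = level3_removable / 7

print(level4_removable)  # 100.0 -- one-seventh of the third-level figure
```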

The entire matter of the use of 
the fish removed from the Great 
Lakes as food for man or beast has 
been complicated by the fact that 
various stable and toxic chlorine- 
containing compounds such as DDT, 
DDD, DDE, and PCB's are concen- 
trated in the oil and body fat of the 
fish of both trophic levels. 

Theoretical Formulations: 
Control from Below 

In contrast to the concept of con- 
trol of the composition of the open- 
water community indicated above, 
the alternate concept — widely held 
a decade ago — still has adherents. 

The control-from-below theory en- 
visions the composition of the com- 
munity as being primarily determined 
by competition within each trophic 
level. In this view, the composition 
of the first level — phytoplankton — 
is determined by the particular con- 
figuration of physico-chemical con- 
ditions at the season in question. 
The species composition of the sec- 
ond level — zooplankton — is deter- 
mined primarily by competition 
among populations of the various 
species of crustaceans and rotifers 
that could occur within the lake for 
the kinds of phytoplankton thriving 
at that moment. Each species is most 
effective in collecting only a portion 
of the total range of sizes and kinds 
of algae available. The planktivores 
feed on whichever species of zoo- 
plankter is available at the time. 

It can be appreciated, therefore, 
that changing the intensity of plank- 
tivore predation upon the zooplank- 
ton would be expected, by the 
control-from-below hypothesis, to al- 
ter the total quantity of zooplank- 

ton — but not necessarily its specific 
composition. Since this concept does 
not consider that planktivore preda- 
tion has any pronounced effect on 
the species composition of the zoo- 
plankton, there is no theoretical basis 
for attempting to modify the com- 
position and standing crop of the 
algae by manipulating the stock of 
planktivorous fish. 

Requirements for 
Scientific Activity 

Examination of the simultaneous 
changes in the abundance of all the 
various species that comprise each 
trophic level is necessary to evaluate 
the alternative concepts of trophic 
dynamics outlined above. This is an 
enormous task, even in the Great 
Lakes where the variety of species 
on all levels is very much less than 
it would be in an equal volume of 
the ocean. 

The greatest difficulties of enumer- 
ation and categorization are pre- 
sented by the extremely numerous 
small organisms of the plankton. 
Automatic methods of counting the 
plankton and categorizing them ac- 
cording to size must be developed. 
The Coulter method of counting and 
sizing particles by the drop in elec- 
trical potential that each generates 
while passing through a small aper- 
ture through which an electric cur- 
rent passes is not entirely satisfac- 
tory. These data must be stored 
electronically so as to be immediately 
available for use with data on phys- 
ico-chemical conditions, on the one 
hand, and data on the characteristics 
of the fish populations, on the other. 

In addition to methods of auto- 
matic data collecting, it will be nec- 
essary to make provision for the 
proper taxonomic assignment of spe- 
cies of the plant and animal plankton. 
This information, gathered from ali- 
quots, must be applied to the auto- 
matically acquired data on size 
categories. At present this is an 
operation that is tedious at best and 
nearly impossible at worst. 
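The aliquot procedure described above amounts to apportioning the automatically acquired size-category counts by the taxonomic fractions observed under the microscope. A minimal sketch, in which the taxa and all numbers are purely illustrative:

```python
# Combine automated size-category particle counts with taxonomic
# fractions determined from a hand-examined aliquot of the same sample.
auto_counts = {"small (1-10 um)": 48_000, "medium (10-50 um)": 9_500}

# fraction of each size category assigned to each taxon in the aliquot
aliquot_fractions = {
    "small (1-10 um)": {"Cyclotella": 0.7, "Chlorella": 0.3},
    "medium (10-50 um)": {"Fragilaria": 0.6, "Pediastrum": 0.4},
}

taxon_counts = {}
for size, total in auto_counts.items():
    for taxon, frac in aliquot_fractions[size].items():
        taxon_counts[taxon] = taxon_counts.get(taxon, 0) + total * frac

for taxon, n in sorted(taxon_counts.items()):
    print(f"{taxon:12s} {n:10.0f}")
```

Done by hand, this bookkeeping is the tedious step the text refers to; stored electronically, the apportioned counts could be set directly against the physico-chemical and fish-population data.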





" 10 


LENGTH 0- 4 


= Cut Off 

Epischura \\ 





16 - 


5 mm. 





The histograms show the distribution and composition of crustacean zooplankton 
(as well as one predatory noncrustacean) before and after a population of Alosa 
pseudoharengus (alewives) became well established. The arrows indicate the 
size and the position in the distribution of the smallest mature instar of each 
dominant species. Such larger zooplankton as Daphnia were present, but they 
represented less than one percent of the total sample count. The triangles denote 
the lower limit or cut-off point of the zooplankton consumed by the several 
species of fish indicated. Note that with the advent of the alewives, the size 
distribution of the zooplankton was depressed significantly to smaller species. 
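The size-selective predation shown in the histograms can be expressed quantitatively. A minimal sketch follows, with a hypothetical size sample and cut-off value rather than the measured data behind the figure.

```python
def fraction_available(lengths_mm, cutoff_mm):
    """Fraction of sampled zooplankton at or above a planktivore's
    lower size cut-off (both sample and cut-off are illustrative)."""
    n = len(lengths_mm)
    if n == 0:
        return 0.0
    return sum(1 for x in lengths_mm if x >= cutoff_mm) / n

# Hypothetical length sample (mm) and a hypothetical 1.0-mm cut-off
sample = [0.3, 0.5, 0.8, 1.2, 1.6, 2.0]
print(fraction_available(sample, cutoff_mm=1.0))  # → 0.5
```

Tracking this fraction through time would show the depression of the size distribution that followed the establishment of the alewives.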



Seasonal changes as well as natural 
and man-induced changes in the 
fish stocks continually perturb the 
lake ecosystem. Continuous analysis 
of the perturbations of the plant and 
animal plankton should make it pos- 
sible to evaluate the concepts of 
trophic dynamics, leading to the de- 
velopment of techniques and concepts 

necessary for managing the Great 
Lakes so as to maximize both water 
quality and fish yield. 

The primary requirement is the 
assembly of a scientific staff together 
with the equipment and instrumenta- 
tion (ships and collecting gear) nec- 
essary for collecting extensive sam- 

ples. The samples should be converted 
into data as automatically as possi- 
ble. Taxonomic identification services 
should be established. Methods for 
data storage and rapid retrieval 
should be developed. Much could 
be done within five years toward the 
development of effective manage- 
ment concepts if a concerted effort 
were made along these lines. 

Effects of Artificial Disturbances on the Marine Environment 

The capability of predicting the 
specific consequence of a general 
disturbance of a natural community 
is basic to planning and evaluating 
environmental controls. Large sums 
of money and considerable effort 
could be saved if we could foresee 
the effects of a particular human
disturbance before it occurs.

History has taught us what to 
expect from the destruction of forests 
and prairies. But we cannot now 
predict, with any confidence, more 
subtle disturbances or the long-term 
cosmopolitan consequences of drastic 
change. This circumstance is rapidly 
changing. Recent theoretical devel- 
opments have directed our attention 
to new ways of looking at the prob- 
lem. There is reason to believe that 
it will soon be possible to predict 
change, at least in relatively simple 
ecosystems such as exist in the sea. 

Ecological Generalities 

Few long-term studies have been 
made on the changes that occur in 
natural communities. We must there- 
fore rely more on theory than ex- 
perience. It is now recognized that 
there is a fundamental relationship 
between the number of species, the 
number of individuals of any spe- 
cies, and the stability of the environ- 
ment. For example, there are fewer 
species with relatively larger numbers 
of individuals in severe or unstable 
environments than in environments 
whose fluctuations are predictable. 
If the environment becomes more 

stable in time, the number of species 
increases. If the environment is dis- 
turbed in any way, the number of 
species decreases. 

Succession and Regression — 
Around the turn of this century 
ecologists recognized that, wherever 
a land surface was laid bare, it was 
colonized by species in a regular 
order. It was possible to predict, on 
the basis of previous observations, 
which species of animals and plants 
would appear first and which would 
later replace the earliest immigrants. 
This process of succession of one 
natural community by another con- 
tinues until a stable climax commu- 
nity is reached. However, succession 
is a reversible process. Any disturb- 
ance will drive the climax community 
down to a lower level of succession. 
The disappearance of species is also 
in a more or less regular order. 

If we had data on the changes in 
all natural communities, we could 
predict the consequences of a general 
disturbance using the principle of 
succession. In the absence of such 
studies, there may be another way 
of obtaining relevant data: There is 
evidence that natural communities 
are continually responding to local 
variations in the stability of the en- 
vironment. Small-scale disturbances 
drive down part of the system with- 
out appreciably affecting other areas. 
If this is the case, a community can 
be viewed as a temporal mosaic, por- 
tions of which are at different levels 
of succession. In this circumstance, 
the variations in species composition 

observed in space could be similar 
to those observed in time. If samples 
taken throughout a natural commu- 
nity at one time are placed in order 
of diversity, the array should simu- 
late the order of species appearance 
or disappearance in succession or 

The Impact of Pollutants 

At least some of the changes asso- 
ciated with pollution resemble those 
observed in natural sequences. For 
example, the order in which marine 
species disappear as a sewage outfall 
is approached is often the reverse of 
the order in succession. Using this 
principle, we can take samples 
throughout an area, arrange them 
in order of diversity, and predict the 
changes that would occur in the 
vicinity of a proposed outfall. 
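The procedure just described, taking samples throughout an area and arranging them in order of diversity, can be sketched with a standard diversity measure such as the Shannon-Wiener index. The station names and species counts below are hypothetical.

```python
import math

def shannon_diversity(counts):
    """Shannon-Wiener index H' from a list of per-species counts."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def order_by_diversity(samples):
    """Arrange station samples from least to most diverse, simulating
    the order of species disappearance toward a disturbance source."""
    return sorted(samples, key=lambda s: shannon_diversity(s["counts"]))

# Hypothetical stations at increasing distance from an outfall
stations = [
    {"station": "A", "counts": [90, 5, 5]},        # near outfall: few dominants
    {"station": "B", "counts": [40, 30, 20, 10]},
    {"station": "C", "counts": [25, 25, 25, 25]},  # far away: even, diverse
]
print([s["station"] for s in order_by_diversity(stations)])  # → ['A', 'B', 'C']
```

The resulting array, least to most diverse, would then be read as an approximation of the sequence of community change expected near a proposed outfall.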

Some pollutants and other types of 
disturbances are probably specific in 
their effects upon communities, af- 
fecting some species more than oth- 
ers. Prediction in these cases will 
require knowledge of the physiologi- 
cal responses of particular species to 
the particular compound or disturb- 
ance. However, where the disturb- 
ance is general, as in pollution from 
domestic sewage or dredging, we 
should be able to predict the effects 
upon the community using the kinds 
of observations and samples now 
taken by ecologists. 

Prediction in Shallow-Water Com- 
munities — Simple communities, low 


in diversity, are strongly influenced 
by stresses imposed by the physical 
environment. Complex communities, 
high in diversity, tend to be more 
stable and integrated. It should be 
easier to predict change in the sim- 
pler, physically controlled communi- 
ties than in the complex, biologically 
controlled associations. 

Marine communities in shallow 
water appear to be simpler than those 
in deep-sea and terrestrial environ- 
ments. Therefore, the planktonic and 
benthic marine communities in shal- 
low water offer the greatest oppor- 
tunities to test hypotheses concern- 
ing succession and the relationship 
between environmental stability and 
diversity. This is fortunate, since 
these communities are of great eco- 
nomic importance and yet suffer the 
greatest exposure to artificial disturb- 
ances. If we can perfect methods of 
prediction in shallow-water commu- 
nities in the next several years, there 
will still be time to develop the 
economic and political institutions 
needed to prevent the wholesale
degradation of these important
ecosystems.

Needed Scientific Activity 

In the next several years we will 
need to perform field and laboratory 
experiments explicitly designed to 
test the growing body of ecological 
theory. For the purpose of develop- 
ing our prediction capability, we 
should perform such experiments in 
areas that are undergoing or about to 
undergo artificial stress. 

Ecological surveys are now com- 
monly made in connection with pro- 

posed reactor installations or sewage 
outfalls. While such studies vary
tremendously in quality, most are
worthless: they are poorly designed,
without regard to previous experience
or theory. It is not possible
to generalize from the data obtained 
from most of these surveys because 
of the great differences in the meth- 
ods of sampling and analysis used. 
One of the most pressing needs in 
applied marine ecology is the devel- 
opment of high and uniform stand- 
ards for the performance of routine 
ecological surveys. 

Monitoring — At the state and na- 
tional level, it would be highly de- 
sirable to develop programs to moni- 
tor environmental events. We could 
maximize the use of data obtained 
from the study of artificial disasters 
if such studies were performed by 
highly trained teams of observers. 
High school and college biology 
teachers might be enlisted in this 
effort. It would not be difficult to 
cover the coastlines of highly popu- 
lated areas such as California. Cen- 
ters for environmental control could 
be established to train teams of ob- 
servers, to develop standards of per- 
formance, and to collate and analyze 
data. Such data would be of immeas- 
urable value in designing basic re- 
search programs and in developing 
environmental controls. 

Research and Training — On a 
long-term basis, we must continue to 
support basic research in population 
dynamics. In shallow-water commu- 
nities there is a particular need to 
place more emphasis on larval re- 
cruitment. Our understanding of the 
temporal changes in benthic marine 
communities is severely limited by 


our lack of knowledge of larval
recruitment.

It is essential to expand research 
and training in systematic biology. 
Systematics remains as the founda- 
tion of nearly all ecological research. 
Yet our attempts to attract talent and 
support in these areas are feeble. 
The major museums of this country 
should be the focal points of this 
effort, but they are suffering decay 
and neglect. 

Scientific Preserves — In the long 
term, it is important to establish large 
scientific preserves to serve as stand- 
ards of environmental quality, as 
natural laboratories, and as sources 
of larvae for the maintenance of 
species elsewhere. We must begin 
this program as soon as possible, for 
few areas remain suitable for these 
purposes along our coasts. 

In conclusion, there is reason to 
believe that we will have a limited 
capability of predicting changes in 
natural communities within the com- 
ing decade. This capability will be 
greatly expanded by the rapid devel- 
opment of ecological theory and the 
performance of critical experiments 
in natural communities. To achieve 
these goals, we should increase basic 
research in systematic biology and 
population dynamics, establish scien- 
tific preserves, and develop programs 
to monitor environmental events. If 
we begin now, we may be able to 
halt the degradation of the marine 
environment as early as 1990. If we 
do not begin now, we will reduce the 
natural communities along our coasts 
to a level where their contribution to 
our economy and general welfare will 
be trivial. 

Marine Flora and Fauna in the Antarctic 

The environment of the antarctic 
seas is less variable than that of 
temperate latitudes with respect to 
temperature and salinity, but the 
quality of light throughout the year 
may be quite different because of the 
long periods of light and dark and the 

winter ice cover. In many parts of 
the antarctic, especially near the con- 
tinental margin, the temperature 
of the ocean water is near 0° cen-
tigrade or below, and nowhere in
the regions known as "antarctic" — 
that is, south of the Antarctic Con- 

vergence — are surface waters warmer 
than 1.0° centigrade. In deeper water 
the temperature is almost constantly 
around -1.8° centigrade. As the
Canadian biologist Dunbar has 
pointed out, a cold constant tempera- 
ture is not a limiting factor for the 



development of life, and the antarctic 
seas are rich and immensely produc- 
tive, at least near the surface and at 
shallow depths. 

Marine Life of Special 
Interest to Man 

Oxygen and nutrients are high in 
these cold waters, as might be ex- 
pected from the abundance of life in 
them. Two centuries ago man drew 
heavily on the stocks of seals of the 
sub-antarctic islands; more recently, 
he has reduced the stocks of blue 
whales to such low levels that it is 
no longer economical to pursue them. 

Recently there have been discus- 
sions of utilizing the vast populations 
of the krill, Euphausia superba, which
are the principal food of the blue 
whales, the Adelie penguins, and 
several kinds of fishes. It is esti- 
mated that the total populations of 
krill are equal to all the rest of the 
fisheries of the world, at least in gross 
tonnage, or about 60 million metric 
tons. However, the krill occurs in 
patches and the small size of the 
individuals poses difficult processing 
problems. Also, the animals are 
"tender" — that is, they must be 
processed immediately. For these 
reasons, immediate extensive use of 
this resource appears unlikely. Among 
other significantly abundant fishes 
are representatives of the family 
Nototheniidae; these are currently 
being fished on an experimental basis 
by the Soviet Union. 

There seems to be less fisheries 
potential in the shallow-water or sea- 
bottom life, which is often abundant 
and varied but lacks the extensive 
beds of large bivalves found in arctic 
waters. Large seaweeds are abundant 
around the sub-antarctic islands and 
near the shores of the Antarctic 
Peninsula, and invertebrate popula- 
tions are large in the vicinity of 
McMurdo Sound and the Soviet base 
in the Davis Sea. Most of the as- 
semblage consists of such organisms 
as sponges, bryozoa, and echino- 

derms, of little potential commercial 
value. The bottom fauna is of con- 
siderable theoretical interest because 
of its apparently stable or slowly 
changing composition, at the same 
time combined with a diversity of 
components comparable to that of 
the Indo-Pacific coral reef environ-
ment.

The rates of turnover or replace- 
ment of the antarctic fauna have yet 
to be worked out in the detail neces- 
sary for rational harvest of the fish- 
eries stocks, but the unfortunate his- 
tory of the blue whale suggests that 
our relations to the fishery resources 
of the antarctic will be governed pri- 
marily by socio-economic rather than 
ecological considerations. That is, we 
will simply fish until stocks are so 
reduced that it becomes unprofitable 
to expend the effort and funds neces- 
sary to keep the catch up. 

Examples of Adaptation 

The adaptations and peculiarities 
of the flora and fauna of the shallow 
waters near the antarctic continent 
are of great scientific and theoretical 
interest. Two of the most interesting 
concern the adaptation of fishes to 
water that is below freezing by the 
production of a sort of natural anti- 
freeze substance (according to one 
researcher) or to a higher concentra- 
tion of salt in the blood (according to 
another); other fish adapt to the low 
temperature and high oxygen by de- 
veloping the ability to function with- 
out hemoglobin. The disagreement 
between DeVries, who finds that cer-
tain fishes may resist freezing because
of a carbohydrate-containing protein
in their blood, and Smith, who
observes that this is effected by
increased salt, should stimulate
more intensive and critical work
on the blood of antarctic fishes. 

The adaptations of the Weddell 
seal, the southernmost mammal, are 
of particular interest. This animal is 
capable of diving for periods of more 
than 40 minutes to depths of 400 

meters (about 1,300 feet), can swim
under water for at least two miles, 
and has excellent sense of direction 
under water. A thorough understand- 
ing of the physiology of this mammal 
will help us to understand the prob- 
lems of diving, which is an increas- 
ingly significant activity in man's ex- 
panding use of the sea. 

Status of Scientific Activity 

At the present time there is con- 
siderable interest in the nature and 
significance of diversity in the sea — 
that is, whether a high ratio of differ- 
ences to total numbers of all kinds or 
abundances is related to a situation 
that may be in equilibrium or indica- 
tive of a long-established condition, 
or whether, conversely, a low pro- 
portion of different kinds of species 
indicates recent, temporary, or chang- 
ing conditions. Many pollution pro- 
grams are predicated on the idea that 
diversity may be associated with 
stable and presumably favorable or 
optimum conditions. As yet we lack
adequate data to ascertain whether
such a relationship exists and what it
may signify, especially for situations
at the bottom of the sea.

The benthic environment of the 
antarctic should provide us with use- 
ful information on this controversial 
problem because it appears to be a 
comparatively unchanging environ- 
ment with a rich variety of species. 
The problem will require a more in- 
tensified level of field ecological work 
on a year-round basis than is being 
done at present, at least by U.S. re- 
searchers. It is in this area that 
theoretical formulation and mathe- 
matical modeling (already being at- 
tempted for situations in other re- 
gions) would be most appropriate, 
but we still lack the data base. For 
example, we are still unable to evalu- 
ate data concerning diversity in dif- 
ferent regions of the antarctic. 

Physiological aspects seem to be 
much better in hand; a concerted 
attack on some of these problems is 



under way by a group on board the 
R. V. Alpha Helix. 

Instrumentation — We are reason- 
ably well equipped, especially in 
physiology, to undertake antarctic 
studies, although details of apparatus 
can always be refined. One problem 
that seems to plague divers in partic- 
ular is the vulnerability of photo- 
graphic equipment in the cold antarc- 

tic waters; various kinds of seals 
continue to break down and put 
cameras out of commission. We 
need some functioning under-water 
photomonitoring systems for the 
dangerous antarctic waters in order 
to obtain information under winter 
conditions near the bases. 

Manpower — Our principal re- 
quirement is interested manpower in 

order to expand field ecological pro-
grams in the next five years to pro-
duce data relevant to theoretical
ideas in ecology at a scale to keep up 
with such work elsewhere. Obvi- 
ously, there is need for some sort of